Some Terminology Problems
The first main section of this chapter focuses on situations in which you start out knowing which categories some of your records belong to, and you want to know how to classify other records whose categories aren’t yet known. The second main section introduces other techniques that ignore known classifications at first. They seek instead to determine whether classifications emerge from how individual records cluster together due to their similarities on measured, continuous variables. Those techniques include principal component analysis (PCA) and cluster analysis.
Before moving on to Chapter 2, “Logistic Regression,” I want to address some problems with terminology that characterize inferential statistics in general but multivariate statistics in particular, because of the way that variables can switch roles before you’re through with them. These problems can create particular confusion in the exploratory context that often characterizes decision analytics.
The Design Sets the Terms
For a variety of reasons, it’s important to distinguish between an analytic technique (such as univariate ANOVA) and your reason for using it. Suppose you have three treatments you want to test: a new drug, a traditional drug, and a placebo. You plan to test whether the treatments have different effects on adult females’ cholesterol levels. You intend to randomly sample your subjects from a population of women whose cholesterol levels are abnormally high, and to randomly assign them to one of your three treatments.
In this context it’s typical and meaningful to refer to the cholesterol measures as a dependent variable. Your hypothesis is that the subjects’ cholesterol levels depend on the treatment to which they are assigned. There is a causal relationship between the dependent variable and the treatment.
For some reason that’s apparently lost to history, the three treatments you have in mind, taken together, are termed an independent variable. There’s nothing independent about it. As the experimenter, you decide what values it takes on (here, new drug, traditional drug, and placebo). You decide (here, randomly) whom to assign to which drug. The only rationale for terming it an independent variable seems to be to distinguish it from a dependent variable.
Still, that’s a relatively benign problem. Statistical jargon has many more misleading terms than “independent variable.” But it’s necessary to remember that independent variable and dependent variable belong to your design, not to the statistical procedure. As you conceive of and carry out your experiment, the differences in average cholesterol level among your three treatment groups depend on the three treatments, not on extraneous sources of variation such as subject self-selection or regression toward the mean. Over time, the terms have come to connote the nature of the design: an independent variable, which is under the experimenter’s control, and a dependent variable, which responds in a cause-and-effect fashion to differences in the independent variable.
Now suppose that you decide not to use traditional ANOVA math on your data. The design and management of your experiment are the same as before. But instead of accumulating squared deviations between and within groups, you use one of the available coding methods to represent group membership and pump your data through a multiple regression application.
The results remain the same as with the traditional ANOVA: the sums of squares, the F-ratio, the p value. More important to this discussion, it’s still appropriate to use the terms dependent variable and independent variable. Your experimental design has not changed; only the arithmetic has. The differences in the group means on the dependent variable are still caused by the differential effects of the treatments and, to a degree that’s under your control, by the effects of chance.
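The equivalence is easy to verify for yourself. Here is a minimal sketch in Python (not an example from this book); the cholesterol values are simulated and purely hypothetical. It runs a one-way ANOVA and then reproduces the identical F-ratio and p value by regressing the measures on dummy-coded group membership:

```python
# Minimal sketch: one-way ANOVA and dummy-coded multiple regression
# yield the same F-ratio and p value. All data are simulated.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
new_drug  = rng.normal(195, 20, 30)   # hypothetical cholesterol levels
trad_drug = rng.normal(205, 20, 30)
placebo   = rng.normal(220, 20, 30)

# Traditional one-way ANOVA.
f_anova, p_anova = stats.f_oneway(new_drug, trad_drug, placebo)

# The same test as a multiple regression: code group membership with
# two 0/1 dummy variables (placebo serves as the reference group).
y  = np.concatenate([new_drug, trad_drug, placebo])
d1 = np.repeat([1, 0, 0], 30)         # 1 = new drug
d2 = np.repeat([0, 1, 0], 30)         # 1 = traditional drug
X  = sm.add_constant(np.column_stack([d1, d2]))
fit = sm.OLS(y, X).fit()

# The regression's overall F-test matches the ANOVA exactly.
print(f_anova, p_anova)
print(fit.fvalue, fit.f_pvalue)
```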
Causation Versus Prediction
Now, completely alter the rationale for and the design of the research. Instead of researching the causal and differential effects of drugs on cholesterol levels, you’re interested in determining whether a relationship exists between the Dow Jones Industrial Average (DJIA) and other indexes of market valuation, such as the advance-decline (A/D) ratio and the total volume on the New York Stock Exchange. The researcher could easily consult any of hundreds of online sources of historical information regarding the DJIA and associated statistics, such as trading volume and A/D ratios, to pick up tens of thousands of data points.
This is different. Here, the researcher is not in a position to manipulate the values of one or more independent variables, as is the case in a true experiment. The researcher cannot by fiat state that, “The advance-decline ratio shall be 1.5 on September 30,” and observe a resulting change in the DJIA as though there were a causal relationship. Nor is the researcher able to randomly select and assign subjects to one group or another: Membership in the companies that make up the DJIA is largely fixed and certainly beyond the researcher’s control.
There’s nothing intrinsically wrong with this sort of situation, although it’s often referred to, a little insultingly, as a “grab sample.” It’s well suited to making predictions, just not to explaining causation. The researcher can’t directly manipulate the actual values of the predictor variables, but instead can ask, “What value of the DJIA would we expect to see if the trading volume increased by 10%?”
It’s best to avoid terms such as independent variable and dependent variable with data acquired in this way. They imply that the researcher controls the independent variables, and that there is a causal relationship between an independent variable and the dependent variable. The relationship might indeed be causal, but the researcher is not in a position to control an independent variable so as to demonstrate the causality.
To acknowledge that “independent” and “dependent” might not be accurate terms without a randomized study with direct manipulation of an independent variable, many writers have adopted the terms predictor variable to refer, for example, to A/D ratios and trading volume, and predicted variable to refer to the variable they want to predict, such as the DJIA. (You also see terms such as outcome and criterion in place of dependent or predicted, but they just tend to beg the question.)
Why the Terms Matter
Two fundamental reasons explain why I have spent so much space here on what must seem like mere pedantry about terminology.
One reason is that most of the techniques of decision analytics are used in an exploratory way. You’re looking for combinations of numerically measured variables that, taken together, might explain differences between categories. It’s typical to use data that already exists, often in companies’ operational databases, to search for those relationships and formulate hypotheses accordingly. Only then might you set up a true experiment in which you randomly select and assign subjects to groups and directly manipulate the nature of the treatments applied to each group. In this way you hope to confirm your hypotheses, and only then might it be appropriate to imply causation using terms such as independent and dependent.
The second reason is that in at least a major subset of decision analytics work, the variables change horses in midstream. As described in previous sections of this chapter on MANOVA and discriminant analysis, you might start out with two or more groups that act as the predictor variable and two or more continuous variables that act as predicted variables. MANOVA asks whether the groups differ significantly on one or more of the predicted variables, or on some combination of them.
If you get a significant result from the MANOVA, you generally proceed to discriminant analysis, where you seek to determine which continuous variables, alone or in combination, distinguish the groups. In effect, you turn the design end for end. The categories that were the predictors in the MANOVA now constitute the predicted variable in the discriminant analysis. The continuous variables that were the predicted variables in the MANOVA are now the predictor variables in the discriminant analysis.
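Here is a minimal sketch of that reversal in Python, using simulated data (the group labels and measurements are hypothetical). The same grouping variable is the predictor in the MANOVA, then becomes the predicted variable in the discriminant analysis:

```python
# Minimal sketch of the role reversal: group membership predicts the
# continuous variables in the MANOVA, then the continuous variables
# predict group membership in the discriminant analysis. Simulated data.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
groups = np.repeat(["A", "B", "C"], 40)
x1 = rng.normal(0, 1, 120) + np.repeat([0.0, 0.8, 1.6], 40)
x2 = rng.normal(0, 1, 120) + np.repeat([0.0, 0.5, 1.0], 40)
df = pd.DataFrame({"group": groups, "x1": x1, "x2": x2})

# Step 1: MANOVA. The categories act as the predictor.
print(MANOVA.from_formula("x1 + x2 ~ group", data=df).mv_test())

# Step 2: discriminant analysis. The design turned end for end: the
# continuous variables now predict the categories.
lda = LinearDiscriminantAnalysis().fit(df[["x1", "x2"]], df["group"])
print(lda.scalings_)                              # discriminant weights
print(lda.score(df[["x1", "x2"]], df["group"]))   # classification accuracy
```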
The situation is clearly impossible if you begin by calling the categories an independent variable in a MANOVA and wind up calling them a dependent variable in a discriminant analysis. It’s just a little confusing, not impossible, if you think of the categories as predictors in the MANOVA and as predicted variables in the subsequent discriminant analysis.
Therefore, I have tried in this book to use predictor variable and predicted variable unless the context makes it clear that an example assumes a true randomized experiment.
Okay, let’s move on to the meat of decision analytics, starting with logistic regression.