- 11.1 Regression Discontinuity
- 11.2 Estimating the Causal Effect of Gaining a Badge
- 11.3 Interrupted Time Series
- 11.4 Seasonality Decomposition
- 11.5 Actionable Insights
11.2 Estimating the Causal Effect of Gaining a Badge
Let’s say we create a badge for snowmobile enthusiasts who reach 50 points on their first day. Users earn points for a myriad of actions, such as reviewing snowmobile products, completing their user profile, and signing up for our newsletter. As a new user, there is no clear way to see your point total. If you reach 50 points on the first day, you’re given the badge; otherwise, you do not receive it.
We want to estimate the effect of gaining the snowmobile “enthusiast” badge on retention. The badge has a nice design and can be added to a user’s profile page or displayed as flair when reviewing. It can affect how users feel about themselves and how other users view them. The hypothesis is that gaining the enthusiast badge leads to greater user retention in our product.
We decided to test this with an RD design. To ensure the validity of this design, we need to address the following issues:
Does this design meet the requirements of an RD design (i.e., a cut point in the treatment variable and hypothetical randomness around it)? Check.
Determine how large the treatment and control groups need to be.
Build a model to estimate the outcome at the cut point from each side.
Check for selection on other variables.
The data has five variables: user_retention (days), score at the end of the first day, profile_description_length (characters), user_friends (count), and viewed_pages. Note that all the variables except user_retention are defined at the end of the first day.
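To make the examples in this section concrete, here is a minimal simulated stand-in for such a data set. The variable names match those above, but the simulation itself (sample size, distributions, and a built-in badge effect) is entirely hypothetical; the book’s actual data and code are in Chapter 16.

```r
# Hypothetical simulated stand-in for the badge data set.
# The built-in badge effect (~1 user day) is an assumption for illustration.
set.seed(42)
n <- 5000
score <- round(runif(n, 0, 100))            # points at end of first day
badge <- as.integer(score >= 50)            # badge rule: 50 points on day one
profile_description_length <- rpois(n, 40)  # characters, end of first day
user_friends <- rpois(n, 2)                 # count, end of first day
viewed_pages <- rpois(n, 10)                # end of first day
user_retention <- rpois(n, 3 + 0.03 * score + badge)  # days in product
badge_data <- data.frame(score, badge, user_retention,
                         profile_description_length, user_friends, viewed_pages)
```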
In Figure 11.1, user days in product, our core retention metric, is plotted against user score. The cut point is at 50, as discussed earlier. In this case, it’s not visually obvious that there is an effect at the cut point, although there does seem to be a difference in the estimates from the linear model as we get closer to it. As we discussed, RD is only defined in the limit, so the closer we get to the cut point, the better the design. This graph is implemented in R in Listing 16.3.
FIGURE 11.1 RD plot of the enthusiast badge example.
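A quick sketch of this kind of RD plot in base R, using the simulated badge_data from above (the book’s version is in Listing 16.3):

```r
# Scatter the outcome against the running variable and mark the cut point.
plot(badge_data$score, badge_data$user_retention,
     pch = 16, col = rgb(0, 0, 0, 0.2),
     xlab = "User score at end of first day",
     ylab = "User days in product")
abline(v = 50, lty = 2)  # the cut point
```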
11.2.1 Comparing Models
In this section, we’ll apply three models (really six, since each model is fit separately on each side of the cut point) to the data to try to estimate the “causal” effect of gaining a badge on user days in product. The three models are plotted in Figure 11.2. We implement this in R in Listing 16.3 in Chapter 16.
FIGURE 11.2 RD plot of the three models from the left and right side of the cut point.
First, we plot our data with user score on the x-axis and user days on the y-axis. Our outcome variable is user days. We need to estimate user days at a score of 50, using the data from both the left and the right of the cut point.
The first model is an OLS model. We estimate the “causal” effect by fitting a regression model on each side of the cut point and taking the difference between them there: the right model’s estimate at 50 minus the left model’s estimate at 50 is our LATE. Note that these models have only one x variable; that is, we’re not including any confounders. (Listings 16.2 and 16.3 go through graphing and RD models in R.)
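A sketch of the OLS version on the simulated badge_data (variable names are the hypothetical ones introduced earlier):

```r
# Fit OLS separately on each side of the cut point.
left_ols  <- lm(user_retention ~ score, data = subset(badge_data, score < 50))
right_ols <- lm(user_retention ~ score, data = subset(badge_data, score >= 50))

# LATE = right-side estimate at 50 minus left-side estimate at 50.
at_cut <- data.frame(score = 50)
late_ols <- predict(right_ols, newdata = at_cut) - predict(left_ols, newdata = at_cut)
late_ols
```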
The second model is a quadratic model, which we fit to each side of the cut point. For reference, the quadratic fit is based on the quadratic equation, y = ax² + bx + c.
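The same subtraction at the cut point works for the quadratic fits (again a sketch on the simulated data):

```r
# Quadratic fit (y = ax^2 + bx + c) on each side of the cut point.
left_quad  <- lm(user_retention ~ score + I(score^2),
                 data = subset(badge_data, score < 50))
right_quad <- lm(user_retention ~ score + I(score^2),
                 data = subset(badge_data, score >= 50))
at_cut <- data.frame(score = 50)
late_quad <- predict(right_quad, newdata = at_cut) - predict(left_quad, newdata = at_cut)
```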
Finally, we’ll apply a localized regression model, or LOESS (locally estimated scatterplot smoothing) model. LOESS is a localized estimator, meaning that we estimate using values within a small range (i.e., the bandwidth) rather than the full data set, as with normal linear regression. We also weight the closest points more heavily than those located farther away. The modeler must set the bandwidth, which is the fraction of the data used to build each local fit. LOESS, then, fits a low-degree polynomial, generally either linear or quadratic, locally. If you’re interested in a more technical explanation of these methods, check out Hastie et al.’s (2009) The Elements of Statistical Learning.
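A sketch with base R’s loess() (span = 0.5 is an arbitrary illustrative bandwidth). Note that the left-side model’s data stop just below 50, so we need surface = "direct" to allow predicting at the cut point rather than getting NA:

```r
# LOESS on each side; span is the bandwidth the modeler must choose.
left_loess  <- loess(user_retention ~ score, span = 0.5,
                     data = subset(badge_data, score < 50),
                     control = loess.control(surface = "direct"))
right_loess <- loess(user_retention ~ score, span = 0.5,
                     data = subset(badge_data, score >= 50))
at_cut <- data.frame(score = 50)
late_loess <- predict(right_loess, at_cut) - predict(left_loess, at_cut)
```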
Generally, LOESS or some other localized model will provide better RD estimates, since it is fit close to the cut point rather than over the full range of the data. Be wary, though, of simply modeling noise at the cut point: with few observations, or with extreme outliers near the cut point, local estimates can be driven radically up or down.
We find that the three models have different estimates of the LATE. We get the LATE by subtracting the left-hand model’s estimate at x = 50 from the right-hand model’s estimate at x = 50 (Table 11.1). To see how this is estimated in R, see Listing 16.3.
Table 11.1 RD-Estimated LATE for OLS, Quadratic, and LOESS Models
| | Left Estimate at 50 | Right Estimate at 50 | LATE Estimate |
|---|---|---|---|
| OLS model | 4.46 user days | 5.44 user days | 0.98 user day |
| Quadratic model | 5.07 user days | 6.72 user days | 1.65 user days |
| LOESS model | 5.17 user days | 5.75 user days | 0.58 user day |
The OLS model has a LATE estimate of roughly a 1 user day increase in retention. The quadratic model LATE shows an increase of 1.65 user days, and the LOESS model LATE estimate is an increase of 0.58 user day. Since the LOESS model estimates the effect close to the cut point and the LOESS curve appears to fit the data adequately, we would be inclined to use the LOESS estimate of the effect. From this example, you can see that the variation in effect size across models is large: more than a full user day, which could itself be larger than the actual effect.
Estimating the LATE in RD designs can be difficult with small sample sizes or clumpiness around the cut point. It’s best to use methods defined as close to the cut point as possible.
11.2.2 Checking for Selection in Confounding Variables
We cannot assume that this “causal” effect is real until we check for similar patterns in the confounding variables. We can overlay the confounders for those who gained the badge and those who did not, and check for selection at the cut point. We create the graph in Figure 11.3 in R in Chapter 16, Listing 16.4.
FIGURE 11.3 Confounder variables (user friends and length of profile).
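One simple way to quantify what the figure shows is to treat each confounder as the outcome and re-run the side-by-side regressions. This is a sketch on the simulated badge_data (where, by construction, there is no selection, so the jump should be near zero; with real data like Figure 11.3, a large jump flags selection):

```r
# Fit the same two-sided models with the confounder as the outcome.
left_c  <- lm(profile_description_length ~ score,
              data = subset(badge_data, score < 50))
right_c <- lm(profile_description_length ~ score,
              data = subset(badge_data, score >= 50))

# A large jump at the cut point indicates selection on this confounder.
predict(right_c, data.frame(score = 50)) - predict(left_c, data.frame(score = 50))
```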
In Figure 11.3, we can see that there is selection at the cut point on one confounder, length of profile, because there is a huge jump in the estimates at the cut point. Since users complete their profiles before they “gain” the badge, this means we are selecting users with longer profiles into the badge, and that “getting the badge” is, in fact, not random at the cut point. This would invalidate our design. Basically, users with longer profile descriptions also show a discontinuity in user score. The discontinuity in this graph means that users are being selected into the enthusiast badge based on other variables.
We can also overlay user friends on this plot and see selection there: users with many friends clump just to the right of the cut point, while very few appear on the left. Thus, users with more friends are getting the badge, so once again there is nonrandom selection at the cut point.
An alternative hypothesis for the treatment effect could be that users with friends are more engaged in the product and likely to be retained longer, such that the enthusiast badge has no causal effect. At this point, we cannot differentiate between our original hypothesis and this alternative.
We might be able to drop all users with friends and see whether the discontinuity at the cut point persists. If it does not, then we could try to estimate the effect on just that subpopulation. In this case, the approach is unlikely to work, as there seems to be a jump in the profile length variable as well.
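A sketch of that subpopulation check on the simulated data: restrict to users with zero friends (a hypothetical cutoff) and re-run the confounder jump from before.

```r
# Re-check the confounder discontinuity among users with no friends.
no_friends <- subset(badge_data, user_friends == 0)
left_nf  <- lm(profile_description_length ~ score,
               data = subset(no_friends, score < 50))
right_nf <- lm(profile_description_length ~ score,
               data = subset(no_friends, score >= 50))
predict(right_nf, data.frame(score = 50)) - predict(left_nf, data.frame(score = 50))
# If the jump disappears, estimate the LATE on this subpopulation as in 11.2.1.
```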
Here are some tips for improving the “believability” of the results when working through an RD design:
Compare a variety of models and sizes of the treatment and control groups. What’s the right model, and what are the right group sizes? No one knows, and the best choices are data-specific. We can compare different types of models and group sizes and see how robust our results are (see the sketch after this list). If we get similar effect sizes across different model types and treatment and control group sizes, then the effect size is more likely to be correct. Even then, we need to check that it’s not being driven by selection.
Check all potential confounding variables. This step is essential. If there is selection on even one confounding variable, the design becomes much harder to believe. We might be able to salvage the design by including the confounder in the model (if it’s relatively unimportant); in the regression example, for instance, we could add the confounder as a covariate. However, do not be lulled into a false sense of security. Selection could signal a much larger problem:
Selection at the cut point might actually signify core problems with the randomness assumption. Many RD designs have confounding at the cut point on many variables because the assumed randomness simply does not exist. For instance, a popular RD design looked at the effect of winning elections on policy outcomes, theorizing that close elections were effectively random. However, there was selection on key variables such as campaign donations and incumbency: candidates who had more money were more likely to get 50.0001% of the vote than 49.9999% of the vote. It could be impossible to control for campaign donations and incumbency advantage, and those two factors could be the driving force behind all the policy outcomes you are examining. The close-election design in majoritarian elections in the United States fails for this reason.
No coverage might signal a lack of support in the data, which means you’ll have to look for a smaller subpopulation to consider. Lack of support means that selection is occurring at the cut point, leaving no users of a certain type in the control group and creating an unrepresentative or unbalanced treatment group. For instance, in our earlier example, perhaps all users with friends are gaining the badge. A practitioner might even prefer statistical matching (described in Chapter 12) over RD. RD sometimes buries selection issues, because many practitioners do not adequately explore all the potential confounders, and that step is not mandatory for carrying out the design. In many RD designs, there are confounding variables.
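Here is a sketch of the robustness check from the first tip, using the simulated badge_data: re-estimate the LOESS LATE across several bandwidths and window widths around the cut point (all values are arbitrary illustrative choices).

```r
# Vary the LOESS bandwidth (span) and the window around the cut point,
# and see how stable the LATE estimate is.
for (span in c(0.3, 0.5, 0.8)) {
  for (width in c(15, 30, 50)) {
    win <- subset(badge_data, abs(score - 50) <= width)
    l <- loess(user_retention ~ score, span = span,
               data = subset(win, score < 50),
               control = loess.control(surface = "direct"))
    r <- loess(user_retention ~ score, span = span,
               data = subset(win, score >= 50),
               control = loess.control(surface = "direct"))
    late <- predict(r, data.frame(score = 50)) - predict(l, data.frame(score = 50))
    cat(sprintf("span = %.1f, window = ±%d: LATE = %.2f user days\n",
                span, width, late))
  }
}
```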