The Monkey Cage Blog was kind enough to run my guest post about the impact of 2012 GOTV efforts [1]. I relied on observational data from Catalist to conduct my analysis (many thanks there) because randomized experiments (which are deservedly the “gold standard”) are unavailable for such a broad investigation. Even with Catalist’s “incredible data”—which is at the individual level—the observational analysis is extremely tricky. This wonky, methodological blog post explains why. (Thanks to Mark Mellman—former boss and mentor of mine—and Josh Rosmarin for prompting this train of thought, and to Kevin Collins for helping refine it.)
The key to estimating 2012 GOTV effects is isolating the impact of 2012 campaign activity. Crucially, this means (a) eliminating non-campaign competitiveness effects while (b) not wiping away campaign activity from the data. [2]
In battleground state analyses like mine, competitiveness effects are a large confounder. Not only are battleground state voters more likely to cast a ballot because their vote matters more in these competitive states, but this effect should naturally be larger among partisans. To re-state: partisans in battleground states are the voters most affected by non-campaign competitiveness effects, and these are the voters whom campaigns most target. That’s a tricky knot to disentangle.
To alleviate this problem, I control for each voter’s individual-level a priori turnout score—i.e., the probability, assigned by political practitioners at the beginning of the campaign season, that she would cast a ballot. This control is so important because competitiveness affected the battleground states in 2008, 2004, and so on, so partisans in those states are naturally more likely to vote in presidential years. Crucially, this increased probability is reflected in their turnout scores. By controlling for this score, I (attempt to) isolate 2012 campaign effects.
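For the methodologically curious, here is a minimal sketch of what that control looks like in practice (not my actual Catalist code, just a toy version). The column names (voted_2012, battleground, partisan, turnout_score) are hypothetical stand-ins for the individual-level fields.

```python
# Toy sketch of the control described above, not the actual analysis.
# Column names are hypothetical stand-ins for individual-level Catalist fields.
import pandas as pd
import statsmodels.formula.api as smf

voters = pd.read_csv("voter_file.csv")  # hypothetical individual-level file

# Logistic regression of 2012 turnout on battleground residence, partisanship,
# and their interaction, controlling for the a priori turnout score.
model = smf.logit(
    "voted_2012 ~ battleground * partisan + turnout_score",
    data=voters,
).fit()

print(model.summary())
# The battleground:partisan interaction is the quantity of interest: the extra
# turnout bump for partisans in battleground states, over and above the
# baseline propensity captured by the pre-campaign turnout score.
```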
I’ll be honest: a multitude of small problems make this 2008-based turnout score an imperfect control. State competitiveness could have changed from 2008 to 2012 (though it barely did). Voters could have become more (or less) partisan in the intervening four years, thus altering their individual competitiveness effects. Some voters moved from a non-battleground state to a battleground state, and their turnout score might not reflect the effect of this move.
However, those cavils pale in comparison to the bigger issue: if controlling for 2008 (via a turnout score) masks 2012 competitiveness effects, shouldn’t this control also mask 2012 campaign effects? After all, Obama and McCain ran full-fledged campaigns in 2008—those effects should be baked into the 2012 turnout score just as I hope the competitiveness effects are. And, if I excuse this issue by claiming (a) 2008 turnout is diluted within the holistic turnout score, (b) some people’s scores will have shifted between elections (thus entering or exiting campaigns’ target universes), and (c) some people have moved between states, then don’t those same excuses mean I similarly failed to fully account for the competitiveness effect?
The good news is that the data do not support this final worry. If the competitiveness effect had inadvertently dominated my analysis, then the partisan effect would be most observed at the extremes of the scale (as the sporadic voters who are the biggest partisans would be the ones to care most about living in a battleground state). However, this pattern is not observed. Thus, I feel fairly confident that I eliminate competitiveness concerns through the turnout score control.
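For those who want to see what that check looks like, here is a rough sketch (again with hypothetical column names rather than actual Catalist fields): bin partisans by their a priori turnout score and see whether the battleground turnout gap spikes at the extremes of the scale.

```python
# Rough sketch of the diagnostic described above; column names are hypothetical.
import pandas as pd

voters = pd.read_csv("voter_file.csv")  # same hypothetical file as above
partisans = voters[voters["partisan"] == 1].copy()

# Bin partisans into deciles of the a priori turnout score.
partisans["score_decile"] = pd.qcut(partisans["turnout_score"], 10, labels=False)

# Within each decile, compare 2012 turnout in battleground (coded 1) vs.
# non-battleground (coded 0) states.
rates = (
    partisans.groupby(["score_decile", "battleground"])["voted_2012"]
    .mean()
    .unstack("battleground")
)
rates["bg_gap"] = rates[1] - rates[0]
print(rates["bg_gap"])
# A gap concentrated at the extremes of the score distribution would point to
# residual competitiveness effects; as noted above, that is not the pattern
# observed in the data.
```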
However, my hunch is that by controlling for 2008 turnout (via the turnout score), I do in fact mask some of the 2012 campaign effects, thus biasing my estimates downward. As a cautious person, I’ll take that risk rather than potentially inflating the numbers upward and seeing an effect where there is none. Others may make a different decision.
To reinforce the idea that it’s difficult to tease out campaign effects from competitiveness effects, imagine if Catalist had provided me with a list of voters who had moved from non-battleground states to battleground states between 2008 and 2012. It’s tempting to think that examining those voters’ 2012 actions would shed light on battleground state campaign effects. Unfortunately, this analysis is not fruitful, because these movers’ 2012 turnout patterns reflect both campaign effects and the fact that their votes mattered more (or appeared to matter more) in 2012 than they did in 2008, thanks to competitiveness effects.
All of the above demonstrates why conducting a randomized controlled experiment, in which none of these confounding elements are an issue, is the key to estimating causality. It’s why I’m so glad that a culture of experimentation has taken hold on the Democratic/progressive side; many props to Malchow, Podhorzer, and everyone at AG/AI for making it happen.