Since Election Day, one of the most frequently asked questions I’ve gotten has been how pollsters (and FiveThirtyEight) missed the razor-thin re-election of Democratic Sen. Mark Warner in Virginia. Most polling showed Warner with a comfortable lead over Republican Ed Gillespie in the high single or low double digits.
Polls sometimes miss — sometimes by a lot. But something else may have also been going on in Virginia. We’ve since learned that at least a couple of surveys in the state were conducted but not released. Gravis Marketing and Hampton University took polls in the final weeks of the campaign and decided not to publish the results.
I wasn’t all that surprised to hear the pollsters kept certain results to themselves. Unfortunately, the practice of spiking polls — an extreme form of pollster herding — seems to be occurring more and more frequently. It happened at least twice earlier in the 2014 election cycle, and that’s just what we know about.
The race in Virginia illustrates why sweeping polls that look like outliers under the rug is so problematic. We don’t know what the Hampton poll showed, but Doug Kaplan of Gravis has revealed that his poll showed a tie between Warner and Gillespie. Kaplan, in an email, told me that he didn’t publish the poll because he thought people would dismiss it as an outlier and attack Gravis as inaccurate.
Kaplan should have trusted Gravis’s polling in Virginia as he did in other states. Imagine if Gravis and Hampton had publicized surveys showing Gillespie closing the gap on Warner in the final days of the campaign. For one thing, the two polls put together — along with a Vox Populi Polling survey giving Warner a 4 percentage-point lead — wouldn’t have looked like such outliers. They would have created a different expectation, and the close result in Virginia wouldn’t have been such a shock.
Methodologically rigorous pollsters such as Charles Franklin in the Wisconsin gubernatorial race, Janine Parry in the Arkansas Senate race and Ann Selzer in the Iowa Senate race released surveys this fall in key races that showed the Republican candidate doing better than other pollsters indicated. They trusted their data and were rewarded. Those surveys were more accurate than the polling averages. But that’s beside the point: Good pollsters should have outliers. As my colleague Nate Silver noted, “Polling data is noisy. … The occasional or even not-so-occasional result that deviates from the consensus is sometimes a sign the pollster is doing good, honest work.”
Indeed, squashing outliers is a sign a pollster doesn’t believe its data. Why should consumers trust a pollster’s data when the pollster itself doesn’t have the gumption to stand behind it?
As Kaplan acknowledged to me, his biggest mistake in Virginia was doubting his data and his team. Ironically, Gravis would have ended up being the most accurate pollster in Virginia’s Senate race had it published that survey.