
The first handful of polls since Tuesday night’s debate is out, and they don’t tell a terribly consistent story. Pretty much whichever Democrat you’re rooting for, you can find some polls to be happy about and others that you’d rather ignore. Here’s a quick list of those polls:

As I said, you can cherry-pick your way to pretty much whatever narrative you like. Biden fan? Both those SurveyUSA numbers look nice. Sanders stan? You’ll probably want to emphasize the Ipsos/Reuters poll. Warren aficionado? The SurveyUSA California poll and the Ipsos/FiveThirtyEight poll look good; the others, not so much.

But some of these polls are also pretty confusing. If you’re Sanders, for instance, should you be happy that the Emerson poll in New Hampshire still has you leading, or unhappy that it shows you down a few points from its previous survey? Or to abstract the question: Should you pay more attention to the trendline within a poll or to the absolute result?

This question does not have a straightforward answer (other than that both are important to some degree). Ideally, you should be comparing a poll not only against the most recent survey by the same pollster, but really against all previous surveys by the pollster in that state — and for that matter also in other states — to detect whether it generally shows good results or poor results for your candidate. And when evaluating trendlines, you should account for when the previous polls were conducted. For example, any poll conducted in October is likely to have shown good results for Warren, since she was at her peak nationally then. So if a new poll came out today showing Warren having fallen by 2 points in Iowa since October, that might be comparatively good news for her since you would have anticipated a steeper decline.
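To make that concrete, here’s a minimal sketch of that kind of comparison, using made-up numbers rather than anything from our model: it judges a pollster’s new result against the change you’d have expected given how the overall average moved over the same period.

```python
# Minimal sketch with hypothetical numbers -- not FiveThirtyEight's actual model.
# Suppose a pollster last surveyed Iowa in October and has a new poll out today.
previous_poll_share = 22   # Warren's share in that pollster's October survey
new_poll_share = 20        # Warren's share in the same pollster's new survey

# The all-pollster average for Warren at those two points in time
average_in_october = 23
average_today = 16

observed_change = new_poll_share - previous_poll_share   # -2 points within this pollster
expected_change = average_today - average_in_october     # -7 points in the broader average

# A 2-point decline is comparatively good news if the field as a whole fell 7 points
surprise = observed_change - expected_change
print(f"observed {observed_change:+}, expected {expected_change:+}, surprise {surprise:+}")
```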

If all this sounds like a lot of work … well, it’s the work that our polling averages and our model are doing for you behind the scenes. Usually our model moves in the direction you might expect intuitively, e.g., Sanders gained ground both in Iowa and in our overall delegate forecast after a Selzer & Co. poll showed him leading the Iowa caucuses.

In the presence of strong house effects, however, the model might move in surprising directions. Just as polls can have house effects in general elections — Rasmussen Reports polls have a notoriously pro-Trump/pro-Republican lean, for example — certain pollsters in the primaries persistently show better or worse results for certain candidates.

And it just so happens that all the pollsters who have released polls since the debate have fairly strong house effects. Emerson College has often shown strong results for Sanders, for instance. And SurveyUSA — both in its California polls and its national polls — has consistently had some of the best numbers for Biden. This is good news for Biden in one sense, since SurveyUSA is one of our highest-rated pollsters. But it also means that a SurveyUSA poll showing Biden doing well isn’t necessarily news; such a result will be in line with our model’s expectations. Conversely, Ipsos has consistently shown some of the worst results for Biden, so another Ipsos poll with mediocre numbers for him doesn’t necessarily move the needle in our model.
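As a rough illustration of how such a lean gets handled, here’s a simplified sketch with hypothetical numbers (not our actual methodology): it estimates a pollster’s house effect as its average gap from the concurrent polling average, then subtracts that gap from the pollster’s raw number.

```python
from statistics import mean

# Hypothetical history for one pollster: each pair is (this pollster's number
# for a candidate, the all-pollster average for that candidate at the time).
pollster_history = [(31, 27), (30, 26), (33, 28)]

# House effect: how far this pollster typically runs above or below the field
house_effect = mean(own - field for own, field in pollster_history)  # about +4.3

def adjusted(raw_share):
    """Remove the estimated lean so the poll is comparable with the field.
    A real model would do this jointly across pollsters, states and time."""
    return raw_share - house_effect

print(f"raw 32 reads more like {adjusted(32):.1f} after the adjustment")
```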

To give you a sense of the magnitude that house effects can have, here are the various post-debate polls with and without our model’s house effects adjustment:

House effects can make a big difference

Polls since the January debate, with and without FiveThirtyEight’s adjustments for house effects


Only candidates polling at 5 percent or more in each survey are shown.

Source: Polls

While the SurveyUSA national poll had Biden at 32 percent and Ipsos had him at 19 percent, the gap is a lot smaller once you account for house effects. The adjustment brings the SurveyUSA poll down to 28 percent and the Ipsos poll up to around 23 percent, a difference that is well within the polls’ sampling error given their respective sample sizes.
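To see why a gap of roughly 5 points can fall within sampling error, here’s the back-of-the-envelope arithmetic in a short sketch; the sample sizes used are purely hypothetical, since they aren’t listed here.

```python
from math import sqrt

def moe_of_difference(p1, n1, p2, n2, z=1.96):
    """Approximate 95% margin of error on the difference between two shares
    from independent polls (normal approximation to the binomial)."""
    standard_error = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return z * standard_error

# Adjusted Biden shares from the two national polls; the sample sizes here
# (400 and 500 likely primary voters) are hypothetical, for illustration only.
gap = 0.28 - 0.23
margin = moe_of_difference(0.28, 400, 0.23, 500)

print(f"gap = {gap:.3f}, 95% margin of error on the difference = {margin:.3f}")
# With samples this size, a 5-point gap falls within the sampling error.
```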

To be clear, house effects are not the same thing as statistical bias, which can be evaluated only after a state has conducted its voting. For example, SurveyUSA is implicitly suggesting that Biden is underrated by other pollsters. If they’re wrong about that, SurveyUSA polls will turn out to have had a pro-Biden bias. But if Biden’s results match what SurveyUSA’s polls project, then their polls will have been unbiased and all the other polls will have had an anti-Biden bias. Obviously, we think you should usually trust the polling average — that’s the whole point of averaging or aggregating polls. But especially in the primaries, where turnout is hard to project, it’s also worth paying attention to the differences between polls — and sometimes pollsters with strong house effects (even to the point of being “outliers”) turn out to be correct.
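A toy example, again with hypothetical numbers, makes the distinction clear: a house effect is measured against the other polls at the time, while bias can only be measured against the eventual result.

```python
# Toy example of house effect vs. bias, with hypothetical numbers.
concurrent_average = 24   # where the other polls had the candidate at the time
poll_result = 30          # where this pollster had the candidate
actual_result = 29        # what the voting eventually showed

house_effect = poll_result - concurrent_average   # +6: the poll ran well above the field
bias = poll_result - actual_result                # +1: yet it was nearly spot-on

# In this case it was the rest of the field that turned out to be biased low.
print(f"house effect {house_effect:+}, bias {bias:+}")
```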


All that said, polls with strong house effects, because of the additional complications they present, aren’t necessarily ideal for evaluating polling swings following news events such as debates. So while it’s tempting to infer from the polls we have so far that the debate didn’t change things very much — no candidate is consistently seeing their numbers surge or crater — we should wait for a few more polls to confirm that.

In the meantime, our topline forecast is largely unchanged. Biden remains the most likely candidate to win the majority of pledged delegates, with a 41 percent chance, followed by Sanders at 23 percent, Warren at 12 percent and Buttigieg at 9 percent. There is also a 15 percent chance that no one wins a majority, a chance that could increase if Bloomberg, who has now almost caught Buttigieg in our national polling average, continues to rise.

