ABC News
Can You Trust Those FireDogLake Polls?

Since November the blog FireDogLake, which can most comfortably be described as belonging to the anti-establishment left, has commissioned the generally strong pollster SurveyUSA to survey four House districts in which there's a Democratic incumbent. The Democratic polling firm Public Policy Polling, meanwhile, has surveyed five such districts over the same period. The two sets of surveys paint very different pictures of the ways that incumbent Democrats are performing in their respective House races.

Below, I have summarized the surveys along with PVIs and the Cook Political and CQ Politics race ratings for each district as of 11/1/2009. (I use these somewhat outdated race ratings because the subsequent ratings may have been affected by the very surveys whose reliability we are attempting to establish.) In all cases, PPP polled the incumbent Democrat against multiple Republican opponents, whereas SurveyUSA tested him against just one. Therefore, I list only the PPP result against the Republican opponent who had the highest name recognition.

These districts, at least superficially, would appear to be pretty similar. The races FDL/SUSA tested had a PVI of R+3 on average, whereas PPP’s districts average to an R+6. Nor is there an obvious difference in the ratings assigned by Cook or CQ. Nevertheless, PPP shows the Democratic candidate leading by 6 points, on average, whereas FDL shows him trailing by 10. That’s a pretty big difference, obviously.

PPP and FDL also tested one district in common, the Arkansas 2nd, where the Democratic incumbent Vic Snyder has since retired. They showed a similarly large difference there, with PPP having Snyder ahead by 1 point and FDL having him down by 17. The FDL survey of AR-2 is about two months more recent; however, both postdate the House’s passage of the health care bill on November 7th (which Snyder voted for). The Democrats’ decline in the generic ballot since November has been about 4 points and would not be enough to explain an 18-point net difference on its own.

So, uh, what gives?

In the past, I have raised questions about the (mis)leading nature of some of the health care questions that FDL asked on its surveys. However, FDL properly asked the horse race question before asking any of the policy questions, and therefore it should not be substantially affected by them. The one potential exception is if (i) there were a lot of drop-offs in the survey because people got fed up with answering the policy questions and (ii) those drop-offs introduced selection bias. Although this is not completely out of the question — the FDL/SurveyUSA polls asked a lot of very wordy questions for an automated poll — I doubt that it would have made a difference of more than a point or two.

Another set of questions about the FDL polls concerns their age demographics: they contain an extremely small number of age 18-34 voters, just 1 percent to 5 percent depending on the district. There is a good discussion of this here, here, here and here. To do something halfway between summarizing and refereeing that discussion, here is what I think we can say:

1) The number of young voters included in the SurveyUSA/FDL polls is not realistic, even given absolute-worst-case, zero-degrees Kelvin assumptions about Democratic turnout.
2) This partially results from a set of screening questions employed by SUSA which make it literally impossible for anyone under age 22 to be included in the survey.
3) The more prominent cause, however, is simply that it’s hard to get young voters on the phone these days, especially if your sample does not include cellphones.
4) The polls were not weighted by age, even though SurveyUSA often does weight by age.
5) The decision not to weight by age was made by SurveyUSA and not FDL (I have independently confirmed this via an e-mail discussion with SurveyUSA President Jay Leve).
6) Going back and re-weighting by age, as SurveyUSA has done, does not really work, since the sample sizes are so small that there is not a reliable basis to establish the weightings.
7) With all of the above said, the differences probably shouldn’t amount to more than a net of about 2-5 points, which is nontrivial but would not explain the entirety of the house effect that we observe.
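To make points 4 through 6 concrete, here is a minimal sketch of what weighting by age involves. All of the numbers are hypothetical (the sample shares are chosen to resemble the 1-5 percent young-voter figures discussed above; the turnout targets are illustrative, not SurveyUSA's):

```python
# Illustrative sketch with hypothetical numbers: post-stratification
# weighting by age. Each respondent's weight is the target population
# share for their age bracket divided by that bracket's observed share.

sample = {"18-34": 0.03, "35-49": 0.25, "50-64": 0.37, "65+": 0.35}  # observed shares
target = {"18-34": 0.18, "35-49": 0.26, "50-64": 0.31, "65+": 0.25}  # assumed turnout

weights = {age: target[age] / sample[age] for age in sample}

n = 600  # hypothetical sample size
for age, w in weights.items():
    respondents = sample[age] * n
    print(f"{age}: ~{respondents:.0f} respondents, weight {w:.2f}")
```

With these numbers, the 18-34 cell contains only about 18 respondents but receives a weight of roughly 6, so a handful of interviews end up carrying enormous leverage. That is the intuition behind point 6: re-weighting after the fact cannot rescue a cell this thin, because the margin of error within it is huge.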

Another difference between the FDL polls and the PPP polls is that the PPP polls are of registered voters whereas FDL tests likely voters; likely voter polls have generally shown much better results for Republicans this cycle. However, the likely voter screen that FDL/SurveyUSA applied is fairly “soft” and asked about past voting behavior (the voter had to have voted in 2008 and either 2006 or 2002) but not the likelihood of voting in the upcoming elections. So it may have made some difference, but not likely as much as a Rasmussen-type likely voter screen might have.
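The vote-history screen described above reduces to a simple boolean condition; this hypothetical encoding is just a restatement of the rule, not SurveyUSA's actual questionnaire logic:

```python
# Hypothetical encoding of the FDL/SurveyUSA vote-history screen as
# described in the text: a respondent qualifies as a "likely voter"
# if they voted in 2008 and also in either 2006 or 2002.
def passes_screen(voted_2008: bool, voted_2006: bool, voted_2002: bool) -> bool:
    return voted_2008 and (voted_2006 or voted_2002)
```

Note what the rule does not ask: anything about intention to vote in 2010. That is why the screen is "soft" relative to a Rasmussen-style screen, which also conditions on stated likelihood of voting in the upcoming election.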

In summary, we have about a 15-point net difference to explain between the two sets of surveys. Of that, probably 2-5 points have to do with SurveyUSA's sampling issues with young voters. Another 1-4 points (this is obviously a rough guess) might have to do with the application of a loose likely voter screen in the FDL surveys. It's possible that some additional differences resulted from the battery of questions that FDL/SurveyUSA asked after the horse race question (the long series of questions about health care) if it triggered a substantial number of drop-offs, although if so the effects are likely fairly small. Finally, some portion of the difference may not result from methodology at all, and instead may reflect sampling error (i.e. random noise) or differences in the nature of the districts that were surveyed.
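Taking the midpoint of each of those admittedly hand-wavy ranges, the bookkeeping looks like this:

```python
# Rough decomposition of the ~15-point net house effect between the
# PPP and FDL/SurveyUSA district polls, using the midpoint of each
# range estimated in the text. These are back-of-envelope figures.
total_gap = 15.0                    # PPP avg D+6 vs. FDL avg D-10, per the text

age_sampling = (2 + 5) / 2          # young-voter sampling issues: 2-5 points
lv_screen = (1 + 4) / 2             # soft likely-voter screen: 1-4 points

explained = age_sampling + lv_screen
residual = total_gap - explained    # drop-offs, random noise, district differences

print(f"explained ~{explained:.1f} pts, unexplained residual ~{residual:.1f} pts")
```

In other words, even generous midpoint estimates for the two identified methodological factors leave more than half of the gap attributable to drop-offs, sampling error, and genuine differences among the districts.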

To be honest, both sets of polls feel a little off to me — PPP’s on the high side (for Democrats) and FDL’s on the low side. In the PPP universe, it feels like Democrats would be bound to lose only about 20 seats; in the FDL universe, they might lose 60. My feeling is that a loss of about 40 seats is probably the most neutral expectation, so you might sort of split the difference between them, which would imply taking 5-8 points away from the Democrat in the PPP surveys and adding the same margin to their numbers in the FDL surveys.

But who knows. PPP hasn't really polled a lot of House districts before, and the one time they tried, in NY-23, it went badly. SurveyUSA has polled House districts with some regularity and usually done fine. On the other hand, the fact that their age demographics were so far off in these polls is a bit concerning and raises questions about their sample selection. Suffice it to say that I think you should treat all polls with skepticism, and these are no exception.

The last thing that needs to be said is that whatever numerical differences there are, they’re on the shoulders of SurveyUSA and not FDL. In an e-mail exchange with me, SurveyUSA’s Jay Leve made it quite clear that his firm was responsible for all methodological decisions about its polls.

As regular readers of this website will know, I have very little respect for FDL’s 11-dimensional chess strategies. That includes the decision to poll in districts like these, something which they have every right to do, but the motivation for which pretty clearly seemed to be scaring the Hell out of Democrats in order to implode the health care bill, which FDL opposes. In addition, I spoke with one source who told me that Vic Snyder had not conducted any polling of his own and that his decision to retire may in fact have been motivated in part by the FDL poll. Snyder is the 13th most valuable House Democrat according to our ratings and his decision to retire was a blow for Democrats, although it’s reasonable to surmise that his decision to retire might have come later had it not come sooner.

But … none of that necessarily impacts the accuracy or integrity of the horse race question, which SurveyUSA has signed off upon. And whatever respect I lack for FDL, I have in equal measure for SurveyUSA, which is a strong and transparent polling firm.

Going forward, when FiveThirtyEight evaluates pollsters, it will include all polls released under that pollster's banner, regardless of the client on whose behalf the poll was conducted. That is, we expect a pollster to do its very best work whenever its brand name is associated with a poll; there are no mulligans. That is Jay Leve's philosophy at SurveyUSA too, which is very comforting. So I don't think anything untoward has gone on here, although there do appear to be some house effects in these congressional district polls resulting from methodological decisions that SurveyUSA has made.

Nate Silver founded and was the editor in chief of FiveThirtyEight.