Use of Likely Voter Model Does Not Explain Rasmussen “House Effect”

Both critics and defenders of Rasmussen Reports’ polling have frequently cited Rasmussen’s use of a likely voter model to explain why their polls have tended to show substantially more favorable results for Republican candidates than the average of other surveys. I have often mentioned this myself, for that matter.

The argument goes like this: those people who vote most reliably in midterm elections tend to be older, whiter, and to have higher social status — which are also characteristics of voters that generally lean toward the Republican candidate. When coupled with what also appears to be a Republican enthusiasm advantage this cycle, it is quite reasonable to believe that a poll of likely voters (like Rasmussen’s) should show more favorable results for the Republicans than one of registered voters or adults (like most others).


This argument is true, as far as it goes. But it is not sufficient to explain the bulk of the Rasmussen house effect, particularly given that Rasmussen uses a "fairly loose screening process" to select likely voters.

In fact, this is quite readily apparent. Although Rasmussen rarely reveals results for its entire adult sample, rather than that of likely voters, there is one notable exception: its monthly tracking of partisan identification, for which it publishes its results among all adults. Since Labor Day, Rasmussen polls have shown Democrats with a 3.7-point identification advantage among all adults, on average. This is the smallest margin for the Democrats among any of 16 pollsters who have published results on this question, who instead show a Democratic advantage ranging from 5.2 to 13.0 points, with an average of 9.6.

To be clear, the partisan identification advantage among registered or likely voters is much smaller; a 3- or 4-point gap would be quite normal there. When making an apples-to-apples comparison to other polls of all adults, however, Rasmussen's figure is something of an outlier, and would reflect a house effect of about 6 points when measuring the net difference between Democratic and Republican preferences.

Meanwhile, an increasing number of pollsters have begun to publish results among likely voters in their versions of the Congressional generic ballot. Six pollsters apart from Rasmussen (GWU, Bloomberg, NPR, Democracy Corps, OnMessage and McLaughlin) have done so since December. They show the Republicans leading the generic ballot by an average of 2.8 points among likely voters (if explicitly partisan-affiliated polls are included, the margin is similar, at R +3.3). This is a potentially excellent result for the Republicans, one which might imply a massive swing of 50 or more seats in the House, but it is less than the 9-point advantage that Rasmussen now shows, and has shown consistently throughout this period.

Note that the house effect here, again, is about 6 points (the difference between the R+9 that Rasmussen shows and the R+3 that the other likely voter polls do). This is of the same magnitude as the 6-point house effect that was introduced in their construction of the all-adult sample, as described above. In other words, Rasmussen does not appear to be applying an especially stringent likely voter model. Instead, the house effect is endemic to their overall sample construction and is "passed through" to their likely voter sample.
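The arithmetic behind these two comparisons is simple enough to spell out. Here is a minimal sketch, using only the figures quoted above, with net margins expressed as (Democratic percentage minus Republican percentage):

```python
# House effect = one pollster's net margin minus the comparison average.
# Net margin convention: positive = Democratic advantage, negative = Republican.

def house_effect(pollster_margin, comparison_margin):
    """Difference between a pollster's net margin and the comparison average."""
    return pollster_margin - comparison_margin

# Party identification among all adults (since Labor Day):
# Rasmussen shows D +3.7; the other 16 pollsters average D +9.6.
adult_effect = house_effect(3.7, 9.6)    # about -5.9, i.e. roughly 6 points toward R

# Generic ballot among likely voters:
# Rasmussen shows R +9 (D -9); six other pollsters average R +2.8 (D -2.8).
lv_effect = house_effect(-9.0, -2.8)     # about -6.2, again roughly 6 points

print(round(adult_effect, 1), round(lv_effect, 1))
```

That the two effects come out nearly identical is the point of the passage above: the likely voter screen adds almost nothing; the lean is already present in the adult sample.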

Why might these differences emerge? Raw polling data is pretty dirty. If you just call people up and see who answers the phone, you will tend to get too many women, too many old people, and too many white people. This is especially the case if you rely on a landline sample without a supplement of cellphone voters.

Pollsters try to correct for these deficiencies in a variety of ways. They may use household selection procedures (for instance, asking to speak with the person who has the next birthday). They may leave their poll in the field for several days, calling back when they do not contact their desired respondent. An increasing number may call cellphones in addition to landlines.

Rasmussen does not appear to do any of these things. Their polls are in the field for only one night, leaving little or no time for callbacks. They do not call cellphones. They do not appear to use within-household selection procedures. In addition, their polls use an automated script rather than a live interviewer, which tends to be associated with a lower response rate and which might exacerbate these problems. So Rasmussen’s raw data is likely dirtier than most.

But pollsters then have a second line of defense: they can massage their data by weighting it to known demographics, such as age, race, gender, or geographic location. This can work pretty well, but it is not foolproof; it requires some finesse. Moreover, some differences in response rates may not intersect neatly with these broad demographic categories. Pew has found, for instance, that those people who rely primarily or exclusively on cellphones tend to be somewhat more liberal, even after other demographic considerations are accounted for.
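One common form of this demographic weighting is raking (iterative proportional fitting), in which respondent weights are repeatedly scaled so that the weighted sample matches known population shares on each variable in turn. The sketch below illustrates the idea; the sample counts and population targets are invented for illustration, and real pollsters weight on more variables than these two:

```python
# Minimal raking (iterative proportional fitting) sketch. All figures below
# are hypothetical; they stand in for a raw sample that, as described above,
# over-represents women and older respondents.
from collections import Counter

respondents = (
    [{"sex": "F", "age": "65+"}] * 40 +   # over-represented groups
    [{"sex": "F", "age": "<65"}] * 25 +
    [{"sex": "M", "age": "65+"}] * 20 +
    [{"sex": "M", "age": "<65"}] * 15     # under-represented group
)

# Known population shares to weight toward (hypothetical census-style targets).
targets = {
    "sex": {"F": 0.52, "M": 0.48},
    "age": {"65+": 0.17, "<65": 0.83},
}

weights = [1.0] * len(respondents)
for _ in range(50):                       # iterate until the margins converge
    for var, shares in targets.items():
        # Current weighted total in each category of this variable.
        totals = Counter()
        for r, w in zip(respondents, weights):
            totals[r[var]] += w
        n = sum(weights)
        # Scale each respondent so the weighted margin matches the target.
        for i, r in enumerate(respondents):
            weights[i] *= (shares[r[var]] * n) / totals[r[var]]

# After raking, the weighted sex and age margins match the targets, even
# though the raw sample was badly out of balance on both.
```

Note what this can and cannot do, which is the caveat in the paragraph above: raking forces the sample to match the targets on the variables you weight by, but it cannot fix a response bias that cuts across those categories (such as the cellphone-only effect Pew identified).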

The bottom line is this: the sample included in Rasmussen’s polling is increasingly out of balance with that observed by almost all other pollsters. This appears to create a substantial house effect, irrespective of whether Rasmussen subsequently applies a likely voter screen.

It also appears to be a relatively new facet of their polling. If one looks at the partisan identification among all adults in polls conducted in September-November 2008, Rasmussen gave the Democrats a 6.5-point edge, versus an average of 8.7 points for the other pollsters; their house effect was marginal if there was one at all.

As I’ve speculated before, I suspect this has to do with shifts in enthusiasm among different types of voters: that it’s now become somewhat easier to get Republicans on the phone because they’re relatively more excited about their political prospects. Techniques like weighting can correct for some of this response bias, but they are an imperfect defense, particularly for pollsters like Rasmussen who have very low response rates (because of their “flash” one-night samples and their use of IVR technology).

If, on the other hand, this is a feature rather than a bug, it requires a more robust explanation from Rasmussen. It is not sufficient, after all, to believe that Rasmussen is getting it right: you also have to believe that almost everyone else is getting it wrong.

Their use of a likely voter model alone is not sufficient to explain the differences. Citing Rasmussen’s success in calling past election outcomes, which is formidable, is also somewhat non-responsive, since their house effect was not so substantial in past election cycles. Moreover, most objective attempts to rate pollsters, including ours, rely on an evaluation of the accuracy of polls in the week or two immediately preceding an election (when pollsters have strong incentives to “behave” themselves). They may reveal little or nothing about the accuracy of polls months ahead of one.

Nate Silver is the founder and editor in chief of FiveThirtyEight.

