The impeachment story is blowing up. It's a high-stakes moment for President Trump and for Democrats. It's also a scary moment for polling.
Yes, people who follow politics are now intensely interested in whether the latest developments might shift public opinion about Trump and impeachment. But when news is exceptionally big, a growing body of evidence suggests it can throw off the accuracy of polling itself.
The problem comes from what pollsters call “differential nonresponse bias.” The idea behind this complex-sounding term is fairly straightforward: If partisans on one side of a political question respond to a survey more readily than partisans on the other side, you can get a polling error. The results in your poll won’t match the real-world opinion you’re trying to measure — instead, the poll will be skewed by how willing some people are to respond to a survey.
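The mechanism is easy to see in a toy simulation. The sketch below uses invented numbers: a population split 50/50 on some question, where supporters answer the phone 60 percent of the time and opponents only 40 percent of the time. Every rate here is an assumption for illustration, not an estimate from any real poll.

```python
import random

random.seed(0)

# Hypothetical electorate: exactly 50% support, 50% oppose.
TRUE_SUPPORT = 0.50

# Assumed (illustrative) response rates: supporters take the
# survey 60% of the time, opponents only 40% of the time.
RESP_SUPPORT, RESP_OPPOSE = 0.60, 0.40

def run_poll(n_contacts: int) -> float:
    """Contact n people; return measured support among those who respond."""
    responses = []
    for _ in range(n_contacts):
        supports = random.random() < TRUE_SUPPORT
        resp_rate = RESP_SUPPORT if supports else RESP_OPPOSE
        if random.random() < resp_rate:  # did this person respond?
            responses.append(supports)
    return sum(responses) / len(responses)

measured = run_poll(100_000)
print(f"True support: {TRUE_SUPPORT:.0%}, poll says: {measured:.1%}")
```

Even with no one changing their mind, the poll reads about 60 percent support rather than 50: the gap is pure differential nonresponse.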
Some folks see potential for nonresponse to wreak havoc on nearly every poll, but actual evidence of nonresponse bias is hard to come by. There are known patterns in the way people respond to surveys — Americans with more education are more likely to take a poll, for example. But pollsters study these patterns and try to correct for them, which usually allows them to avoid error, though not always.
For the most part, pollsters’ adjustments do a pretty good job of accounting for nonresponse problems, especially in big national polls.
The exception, however, comes with the highest of high-profile news events. We have evidence of three brief episodes of nonresponse error, and all three came in the wake of big news stories near the conclusion of an election — in 2012, 2016 and 2018.
During the 2012 campaign, pollsters who conducted “panel” surveys — polls that recontact previously interviewed respondents to see whether individuals’ opinions change over time — found an unusual result: Republicans were more likely than Democrats to respond to follow-up surveys fielded just after the first presidential debate, which news coverage treated as an unambiguous win for challenger Mitt Romney against then-President Barack Obama. Polls showed Romney chipping away at Obama’s lead. But rather than voters changing their minds about who to vote for (and therefore changing the likely election result), much of the shift may have been caused by a change in what kinds of people were responding to surveys.
During the 2016 campaign, YouGov found Trump’s supporters were less likely than Hillary Clinton supporters to participate in panel reinterviews conducted just after the release of the “Access Hollywood” tape.
Finally, during the 2018 midterms, following the nationally televised congressional hearing in which Christine Blasey Ford leveled accusations of sexual assault against Brett Kavanaugh, who was then a Supreme Court nominee, we saw something similar in the polling I helped conduct at SurveyMonkey.
The survey completion rate jumped 3 percentage points among those who expressed approval of Trump while remaining essentially flat among those who expressed disapproval. That divergence in completion rates increased the share of self-identified Republicans and conservatives, non-college-educated women and respondents in rural ZIP codes in our unweighted data.
| Survey dates | No. of respondents | Completion rate, Trump approvers | Completion rate, Trump disapprovers |
|---|---:|---:|---:|
| July 1 – Sept. 26, 2018 | 230,028 | 75.3% | 74.2% |
| Sept. 27 – Oct. 18, 2018 | 44,552 | 78.8% | 75.1% |
| Oct. 19 – Nov. 7, 2018 | 52,131 | 76.1% | 74.5% |
Put more simply, the composition of our samples changed, and even after demographic weighting, the respondent pool included more of the kinds of people who tend to support Trump. As a result, we also saw what appeared to be an increase in Trump’s approval rating among all U.S. adults. Yet that trend was mostly an artifact of the response patterns; it didn’t reflect a meaningful real-world change in how Americans viewed Trump.
Indeed, by the end of October, the higher completion rates among Trump approvers, the change in the demographic and geographic composition of our unweighted data, and the corresponding bump in Trump’s approval rating had all faded, reverting back to where they had been through much of the summer.
What do these phantom trends have in common? Each came about a month before a national election, which may speak to the power of election campaigns in heightening awareness of political news. But more importantly, each involved the kind of story that FiveThirtyEight editor-in-chief Nate Silver recently described, in the context of handling “outlier” polls, as “spectacular, blockbuster news events that dominate the news cycle for a week or more,” the sort of story that happens only once or twice in an election cycle.
The story of Trump’s Ukraine call and the impeachment inquiry is not there yet, in my view, but we are getting closer. For now, the news is big in the political media, but most Americans still don’t know much about it. Whether it grows to the sort of everyone-is-talking-about-it status that the Kavanaugh hearings or the “Access Hollywood” tape reached remains to be seen. A dramatic live-television hearing featuring the Ukraine whistleblower, or the release of further compelling evidence, could push the story into that true blockbuster category.
At that point, it will be critical to watch the polling data for signs of nonresponse bias: Specifically, do polls appear to show sudden and “statistically significant” shifts in party identification?1 If, for example, support for impeachment grows, will it be because Americans are genuinely warming to the idea or simply because Democrats are becoming more likely to take surveys than Republicans?
One test: Check the pollster’s previous surveys to see if the percentages of respondents who identify as Democrats, Republicans and independents have changed notably. Of course, any apparent shifts in partisan identification could be real — party identification is an attitude, so it can and does change — but that balance usually shifts at a glacial pace. A sudden, significant change in the partisan composition of the response pool would be reason for skepticism.2
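One way to formalize that check is a standard two-proportion z-test on the share of respondents identifying with a party across two consecutive polls. The function and the sample sizes below are my own illustration, not any pollster's actual procedure; the 31-to-36-point jump is an invented example of the kind of shift that should raise eyebrows.

```python
import math

def party_share_shift_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z statistic for a change in, e.g., the share of
    respondents identifying as Democrats between two polls."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Illustrative numbers: Democratic ID jumps from 31% of 1,000
# respondents in one poll to 36% of 1,000 in the next.
z = party_share_shift_z(0.31, 1000, 0.36, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 is the usual bar for "significant"
```

For these made-up numbers, z comes out near 2.4 — past the conventional 1.96 threshold, so a shift that size in real data would be exactly the kind of sudden partisan swing worth treating with suspicion rather than taking at face value.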
Also, when big news strikes and there’s reason to worry about response bias, two particular kinds of polls are especially valuable:
First, polls that recontact people interviewed previously, like those that detected the phantom trends during the 2012 and 2016 campaigns, have a clear advantage. These panel surveys can directly measure nonresponse bias, and they can also identify real, individual-level changes in public opinion.
Second, polls that use samples drawn from lists of registered voters also have an advantage. They can detect directly whether Democrats tend to respond more readily than Republicans, or vice versa, based on data in official records. If you’re polling people drawn from a list of registered voters, you have more concrete evidence of someone’s partisan loyalty, including party registration in states that record it, primary vote history in other states, or scores that combine all available predictors of partisanship.
Ultimately, should the next polling result on impeachment look like an outlier compared to past results, the best advice is what Nate offered a few weeks ago, but with a twist: Be open to the possibility that the impeachment story is big enough to produce real change, but be cautious. First, be cautious by looking at a polling average that includes the new result, rather than at the new result alone. And second — and this is the twist — stay somewhat skeptical of new polls if this story gets so big that it shows signs of distorting the polls themselves. Either way, it will probably take several weeks, and many new polls, to know for certain.