Most Pollsters We Surveyed Say Polling Did Well This Year

After Republicans routed Democrats in Tuesday’s Senate, House and statehouse elections, my colleague Nate Silver and many others said pre-election polls were skewed toward Democrats. That makes this the second consecutive national election with polls skewed toward one party: In 2012, they zagged toward Republicans.

That prompted us to ask pollsters how they thought they did. Before the election, we’d conducted three polls of top campaign pollsters to learn how they did their work, what ethical principles they followed and where they saw their industry heading.1 Now we wanted to know how they assessed the performance of their polls. The first question of our brief post-election poll was, “In light of Tuesday’s results, did election polls do well this year at depicting the electorate’s views?” Seventeen of the nation’s most prolific political pollsters responded.

[Table: Our poll of pollsters on how election polls performed, Nov. 6]

What they said was a surprise. We checked in on partial results of the poll early Wednesday and reported a few defenses of polls’ performances. When our poll closed, the result was a win as one-sided as Mississippi’s Senate election: 10 answered yes, while just six answered no. (The other response was, “Some did and some didn’t.”) After a night in which results consistently deviated from what polls had suggested — and almost always in the same direction — most pollsters were pretty pleased with how they’d done. Or at least, most pollsters who felt like answering our poll spoke positively about polls’ performance. (You can see our questions at SurveyMonkey and see full results on GitHub.)

Those who said polls did well had some valid reasons.

Some were answering specifically about their own polls, and not every pollster had a bad night. Understandably, they hadn’t yet had time to study everyone else’s performance. One of the respondents who said polls didn’t do well also focused on results close to home: “We polled in 26 geographies. We embarrassed ourselves in just two of those 26 geographies. But that’s two too many.”

Others said the polls were mostly right, but wrong on some details. “In general the polls did a good job of predicting the winners and losers on Tuesday but they appear to have generally underestimated the magnitude of Republican victories,” said Christopher P. Borick of Muhlenberg College, who answered yes. (We made the case before returns came in that counting correct calls isn’t one of the better ways to evaluate forecasts; in any case, many polls made wrong calls in gubernatorial races.)

Barbara Carvalho of Marist College argued that polls got some trends right: Voters were worried about jobs and the economy, and dissatisfied with Washington, D.C., and President Obama. Women and independents were moving toward the Republicans. Plus, it’s possible polls accurately showed what people thought on the day they were asked, but not on the day they voted. “At the time many of the polls were taken, they showed close races when they were close and several organizations showed movement to the GOP from previous polls closer to Election Day,” Carvalho said.

That isn’t good enough for Matt Towery of InsiderAdvantage. “We captured the trends but not the wave,” he said.

Towery was unusual in his willingness to say openly that polls had a bad night. Four of the seven pollsters who didn’t answer yes requested anonymity. Just two of the other 10 made the same request.

Carvalho said bad polls drowned out good ones in models that combined them, such as FiveThirtyEight’s. She pointed out that some pollsters spotted Republicans’ edge in Iowa, Kansas and Colorado. “Models tended to mute differences and trends … because they were not part of the average,” Carvalho said. “Faster, cheaper, later, needs to be balanced with good science.” The challenge is to identify ahead of time which pollsters are spotting the correct trends and which should be muted.
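To make Carvalho’s point concrete, here is a minimal sketch, in Python with made-up margins, of how a simple polling average mutes the one pollster who catches a late shift. The firm names and numbers below are hypothetical, not drawn from any race discussed in this article.

    # Hypothetical final-week polls of one race: Republican margin, in points.
    # Four firms show a tight race; one outlier catches a late GOP surge.
    polls = {
        "Firm A": 1.0,
        "Firm B": 0.5,
        "Firm C": 2.0,
        "Firm D": 1.5,
        "Outlier": 7.0,  # the lone firm that spotted the wave
    }

    # An unweighted average pulls the outlier toward the consensus.
    average = sum(polls.values()) / len(polls)
    print(f"Polling average: R+{average:.1f}")  # R+2.4

    # If the race ends R+8, the outlier missed by 1 point while the
    # average missed by 5.6: the aggregate muted the poll that had
    # the trend right.
    actual = 8.0
    print(f"Outlier error: {abs(polls['Outlier'] - actual):.1f} points")  # 1.0
    print(f"Average error: {abs(average - actual):.1f} points")  # 5.6

Weighting polls by past accuracy, as models like FiveThirtyEight’s do, softens this effect but can’t eliminate it; that is exactly the identify-ahead-of-time problem.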

What will pollsters have to do to get better at spotting correct trends in 2016? I asked what the implications of this year’s results were for the industry. A few pollsters who didn’t want to be named were pessimistic. “Only bad things,” one foresaw. “A lot of head-scratching,” another predicted. A third anticipated more of “the never-ending struggle to determine who will vote, balanced against the pressures of speed and cost. It won’t get any easier.”

One anonymous respondent specifically predicted that pollsters will need to fine-tune their methods of determining who will vote. “The problem in many, if not most, polls is that they decide a priori what the electorate will look like,” the pollster said.

But another respondent said pollsters have a good handle on who comes out to vote in presidential election years such as 2016. “The 2016 electorate will be more like pollsters thought the 2014 [electorate] would look than the 2014 electorate actually was,” the pollster said. “And thus will ensue another ‘unskewing’ battle from the right.”

Every pollster must work to improve after elections, John Anzalone of Anzalone Liszt Grove Research said. “Each year we as pollsters are going to have to work harder and tighten our procedures and methodology. You can conduct good polling but you have to be willing to spend the money and time to do it right.”

Who got it most right this time around? We asked which polling organizations were particularly successful. The only two votes cast for a firm the respondent didn’t work for went to Selzer & Company. The Iowa firm looked worthy of its A+ in our ratings by giving Joni Ernst a 7-point lead in the Iowa Senate race when the other 10 most heavily weighted polls in our database showed Ernst trailing, tied or leading by 3 points or less. Ernst won, by 8 points at last count.

Pollsters were less reticent when asked to name pollsters who stumbled. One answered, “Most everyone.” Two specifically named YouGov, and one each named Public Policy Polling, Fox News and Zogby.

Douglas Rivers of YouGov assessed its Senate poll performance in an article published Wednesday on YouGov’s website. He concluded that YouGov’s polls overstated Democrats’ two-party vote share by an average of 2.04 percentage points, a smaller bias than that of the seven other pollsters he assessed that were active in at least five Senate races. “But this is a larger average bias than we have encountered in previous years,” Rivers wrote. YouGov also had the third-highest mean absolute error.2
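Rivers’s two yardsticks are easy to conflate, so here is a quick illustrative sketch, with invented errors rather than YouGov’s data. Bias is the signed average of polling errors, so misses in opposite directions cancel; mean absolute error averages magnitudes, so they don’t.

    # Signed errors for five hypothetical Senate polls, in percentage points.
    # Positive values overstate the Democrat's two-party share; negative
    # values overstate the Republican's.
    errors = [3.0, 2.5, -1.0, 4.0, 1.5]

    # Bias: the signed average. Opposite-direction misses cancel, so this
    # captures a consistent lean toward one party.
    bias = sum(errors) / len(errors)

    # Mean absolute error: magnitudes only, so misses never cancel. This
    # captures overall accuracy regardless of direction.
    mae = sum(abs(e) for e in errors) / len(errors)

    print(f"Bias: D+{bias:.1f} points")  # D+2.0
    print(f"Mean absolute error: {mae:.1f} points")  # 2.4

The two can diverge, as in Rivers’s own numbers: YouGov’s bias was smaller than that of the seven other firms he assessed even as its mean absolute error was third-highest.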

Tom Jensen of PPP answered some of our previous polls, but not this one, so I asked him to respond to the mention of his firm. He pointed out that YouGov found PPP had the third-lowest average error of eight polling firms assessed. “We were actually above average,” Jensen said. “We are certainly digging into the data though to see what we could have done differently.”

Brad Coker of Mason-Dixon Polling & Research, Inc., was among the majority who didn’t name any specific winners and losers. “Everyone has good years and bad years,” he said.

There was little consensus about which polling methods yielded the best results. Two said phone polling that included cellphones won the night. “Everyone has one,” Gabriel Joseph of ccAdvertising said of cellphones. “Ignoring this universe of respondents is irresponsible.”

Three pollsters said online polls underperformed, but another said some online polls, run quietly for testing purposes, did well and were never released. “We wish we had just stuck to it,” the pollster said. “Our online was right on target but went unreleased.”

Because it’s never too early to turn to 2016, we asked about the next round of Senate races: How many seats would Republicans control in 2017? Six gave a number. Three said 51.3 One said 49, another said 48 and Joseph said 57.

Anzalone, a reliable provider of colorful quotes in our polls, said, “Who the fuck knows?” He elaborated, less profanely: “It doesn’t matter how long you have been in this business; there are always going to be surprises each cycle. And I don’t mean just at the end. No one thought six months ago Iowa and Colorado would be in play. It just takes one variable to change the whole equation.”

Language and emotions ran a bit raw after a turbulent night for pollsters. Carvalho enjoyed it. “Love Election Night: lots of numbers, analysis, and little sleep,” she said. “It can’t be beat.”

Anzalone related a different experience. “It was a 24-hour enema,” he said.

Footnotes

  1. We started with the 70 pollsters with the most election polls in our database. Then we reached out to the 62 who were active and reachable. We heard back from 45, including 42 who expressed interest in answering our survey questions. We sent the first poll in September and published the results this month. We put out the second poll starting Oct. 12, using SurveyMonkey to collect the responses. The third poll started Oct. 24, again using SurveyMonkey. Our fourth poll, again using SurveyMonkey, started and ended on Nov. 5. Our 17 respondents include commercial and academic pollsters. Not every respondent answered every question. As with our prior polls, we granted anonymity when requested so our respondents would speak freely.

  2. That includes all error: misses that skew toward Democrats count the same as misses that skew toward Republicans. YouGov’s Rivers also reported on the company’s polling in all 435 House races. Its House polls overstated the Democratic two-party vote share by an average of 1.74 percentage points, with a mean absolute error of 3.75 percentage points.

  3. One of them, more precisely, said three fewer than whatever number the GOP ends up with when all of Tuesday’s races are called. That number looks likely to be 54.

Carl Bialik was FiveThirtyEight’s lead writer for news.
