Pollsters Say They Follow Ethical Standards, But They Aren’t So Sure About Their Peers

For our second poll of leading U.S. political pollsters, we asked about ethics. The pollsters who answered, even those who asked for anonymity, said they follow basic ethical principles such as not copying others’ work or letting campaigns dictate results. But many held doubts about their peers’ ethics, and about the media’s ability to parse honest, good polls from dishonest, lousy ones.

As with our first poll, we contacted the most active political pollsters in our database.1 This time, 24 responded to questions about ethics and about FiveThirtyEight’s pollster ratings and Senate forecast. Their answers reveal the thinking of two dozen people who conduct many of the polls that help inform horse-race coverage of major elections. Their work is a primary factor driving our and others’ models forecasting November’s vote for Senate control. (You can see the poll and download the full results of both polls on GitHub.)


Earlier this year, my colleague Harry Enten reported on evidence he found suggesting that some pollsters were copying others’ results, revising their numbers to fall into line with those of their peers. I asked our pollsters if any ever do that. All 22 who answered said they don’t. “We are paid to provide our findings and not to be influenced by others,” Bernie Porn of EPIC-MRA said. Another pollster said, “We may be wrong, but we’ll live with that rather than being dishonest.”

John Anzalone, of Anzalone Liszt Grove Research, drew a distinction between revising numbers to match others’ and rejecting polling results that don’t pass the smell test. He pointed out that even an optimal poll will get poor results — off by more than the margin of error — one out of 20 times, because margins of error are conventionally quoted at a 95 percent confidence level. “That is just statistics,” Anzalone said. When his firm suspects that’s happened, sometimes “we will go back into the field at our own cost to check.” Conducting a brand-new poll is not at all the same as changing a poll’s findings.
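
To see where that one-in-20 figure comes from, here is a quick simulation; the race, sample size and number of polls are made up for illustration, and the margin of error uses the standard formula for a proportion at 95 percent confidence.

```python
# Illustrative simulation: how often does a perfectly conducted poll land
# outside its own margin of error purely by chance?
# The true support level and sample size below are hypothetical.
import math
import random

TRUE_SUPPORT = 0.52   # hypothetical true share backing a candidate
SAMPLE_SIZE = 800     # hypothetical number of respondents per poll
NUM_POLLS = 10_000

# Margin of error for a proportion at 95 percent confidence:
# 1.96 * sqrt(p * (1 - p) / n)
moe = 1.96 * math.sqrt(TRUE_SUPPORT * (1 - TRUE_SUPPORT) / SAMPLE_SIZE)

misses = 0
for _ in range(NUM_POLLS):
    # One simulated poll: each respondent backs the candidate with
    # probability TRUE_SUPPORT, independently of everyone else.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    if abs(hits / SAMPLE_SIZE - TRUE_SUPPORT) > moe:
        misses += 1

print(f"Margin of error: +/-{moe:.1%}")
print(f"Share of polls off by more than that: {misses / NUM_POLLS:.1%}")
# Prints roughly 5 percent -- about one poll in 20.
```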

One respondent wasn’t convinced we’d get meaningful results from the question, even though we granted anonymity. “Um, you think anyone is going to answer this honestly?” said the pollster, who didn’t want to be named.

When we asked the same question about other pollsters, our respondents were more inclined to suspect copying. Of the 14 who answered, eight thought some pollsters adjusted their numbers to match published polls’ results. Two people cited Enten’s article as the reason for their belief. Another said, “I’ve just noticed over the years that certain organizations have wild fluctuations in their numbers that often follow the release of a poll with numbers that run counter to what they had recently published.”

Julia Clark of Ipsos answered “No” to the question, saying adjusting results that way would “fly in the face of any kind of good practice, and would become obvious very quickly (and discredited).” However, she added that paying some attention to others’ work is a different matter. “With the renewed focus on Bayesian statistics” — Clark cited a recent New York Times article as an example — “it is inevitable that pollsters will begin to take cues from the marketplace. The form this takes will vary enormously by pollster.”

Many pollsters agreed with Clark that consulting their peers’ results was sensible. Of 21 who answered a question about whether they do this, four said “Yes” and six said “Sometimes.”

“We do it to anticipate potential questions or backlash, and to help our clients plan for this,” Clark said. “It would be crazy not to look at where the market is before publication!”

Speaking for the narrow majority who don’t check others’ work, Andrew Smith of the University of New Hampshire said, “Trust your methods, trust your numbers.” Another pollster added, “We just take our best shot. I don’t really want to care what other polls say.”

We also asked our respondents who poll for campaigns whether they ever let those campaigns dictate weighting or other factors that can affect results. All 15 who answered said they don’t. “My job with any client is to tell them what they need to know and not what they want to hear,” Brad Coker of Mason-Dixon Polling & Research, Inc., said.

Several pollsters described how their relationships with campaigns can get a little complicated. Sometimes campaigns want to try out different assumptions, such as about voter turnout, to see how they affect the poll’s results. Pollsters said they are OK with that, so long as these alternative scenarios are for internal consumption only. But if the campaigns want to publish results, the pollsters say they insist the results come from the model they believe. And if the campaigns push back? J. Ann Selzer, of Selzer & Company, considered that hypothetical: “If a campaign tried to publish a poll I did not think reflected an accurate picture, yet they wanted to portray it that way, I reserve the right to comment and clarify. Publicly. Or take my name off the poll, I suppose.”
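
To make that concrete, here is a minimal sketch with entirely made-up numbers: the same raw interviews yield different toplines once each age group is weighted by its assumed share of the electorate, which is why pollsters care so much about which turnout model goes public.

```python
# Illustrative only (made-up numbers, not from any actual poll): the same
# interviews produce different toplines under different turnout assumptions,
# because each group's responses are weighted by its expected share of voters.

# Hypothetical candidate support within each age group, from the raw interviews.
support_by_group = {"18-34": 0.60, "35-64": 0.48, "65+": 0.42}

# Two hypothetical turnout scenarios: each group's share of the electorate.
scenarios = {
    "high youth turnout": {"18-34": 0.30, "35-64": 0.45, "65+": 0.25},
    "low youth turnout":  {"18-34": 0.15, "35-64": 0.50, "65+": 0.35},
}

for name, shares in scenarios.items():
    topline = sum(support_by_group[g] * shares[g] for g in shares)
    print(f"{name}: candidate at {topline:.1%}")

# Output:
# high youth turnout: candidate at 50.1%
# low youth turnout: candidate at 47.7%
```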

But again, several pollsters said they think others in the industry don’t adhere to those standards. I asked if there are any polls we shouldn’t be including in our model. “I don’t believe party-affiliated polling firms should be included in the models,” Berwood Yost of Franklin & Marshall College said. “I have a concern that partisans are increasingly producing polls in the hope that they can affect the poll averages in a race.”2

Another pollster said trying to forecast elections simply by averaging polls would fail because there are “too many mysterious polls being released. I think some are being financed by partisan groups without disclosure. A straight average would be skewed simply by one side putting out more polls than the other.”
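
A toy example with invented margins shows the arithmetic behind that worry: a straight average drifts toward whichever side releases more polls, regardless of what the independent polling says.

```python
# Illustrative only: hypothetical poll margins (candidate A minus candidate B,
# in percentage points). A straight average gets pulled toward whichever side
# releases more polls.
independent_polls = [1.0, 2.0, 0.0, 1.5]            # a tight race, roughly A +1
partisan_releases = [5.0, 6.0, 5.5, 4.5, 6.5, 5.0]  # one side floods the field

independent_only = sum(independent_polls) / len(independent_polls)
combined = independent_polls + partisan_releases
straight_average = sum(combined) / len(combined)

print(f"Average of independent polls only: A +{independent_only:.1f}")        # A +1.1
print(f"Straight average once the flood arrives: A +{straight_average:.1f}")  # A +3.7
```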

It’s not just election forecasts that could be unduly swayed by partisans, pollsters said. They also fear the media is too credulous about the polls it reports. “Some polls are so biased that they shouldn’t be reported,” Yost said. Anzalone said, simply, “Stop publishing shitty polls.”3

More than partisan bias can undermine a poll’s quality. Both Porn and Darrel Rowland of the Columbus Dispatch said reporters should try to see polls’ question wording and order, because each can influence how people respond. If pollsters won’t share that information, reporters shouldn’t publish their work.

Sometimes, reporters publish results of polls — then hear from polling organizations that they have recalled the poll, like a faulty car. All but three of the 20 pollsters who responded to our question about it said recalling polls was sometimes appropriate. “There is so much emphasis on speed these days that errors inevitably creep in,” Clark said. “It is important to acknowledge them. If that takes the form of a ‘recall,’ so be it.” However, she didn’t think a recall was practical — once a poll’s in the public domain, it can’t be unpublished.

Not everyone focused on the bad actors and errors in their midst. Many emphasized the good, innovative work being done. Pollsters provided lots of nominations when asked to name a polling organization other than their own that is doing the best work. Four named Pew Research Center, with one specifically citing its nonelection work. “Pew has become the gold standard of polling,” Anzalone said.

Selzer and Gallup each got two votes. Public Policy Polling, CBS News/New York Times, NBC News/Wall Street Journal, Ciruli Associates and Democratic pollsters Stanley Greenberg and Michael Bocian each got one.

Ciruli got its vote from a peer who’d never heard of the company but saw it received an A+ in our ratings. That was typical of the generally positive reaction the pollsters had to our ratings — with some reservations.

Of 22 pollsters who answered our question about whether our rating of their organization was fair, just six said it wasn’t. Eight said it was, and six chose “Other.” It’s possible that pollsters who like our ratings are more likely to participate in our survey and to answer this particular question than are those who don’t like them.

Some pollsters raised concerns about the ratings, including:

  • Those who poll for campaigns or otherwise conduct private polls don’t get credit for those.
  • Pollsters who don’t allocate undecided voters to candidates may make different predictions about the final vote margin than their official polling numbers suggest.
  • Our database goes back to 1998, so pollsters that have changed their methods recently and gotten improved results don’t get full credit. (We do weight more recent polls more heavily; a sketch of how such weighting can work follows this list.)
  • Pollsters “are so close together that the grading cuts are arbitrary,” Patrick Murray of Monmouth University said.4
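
As a rough illustration of that recency weighting, here is a sketch that discounts older polls with an exponential decay and an assumed four-year half-life; the form and the numbers are stand-ins for illustration, not the formula the ratings actually use.

```python
# A minimal sketch of recency weighting (decay form, half-life and poll errors
# are assumptions for illustration): more recent polls count for more in a
# pollster's average error.

# Hypothetical record: (years before the rating was computed, poll error in points)
polls = [(14.0, 6.0), (10.0, 5.5), (4.0, 3.0), (1.0, 2.5), (0.2, 2.0)]

HALF_LIFE_YEARS = 4.0  # assumed: a poll's weight halves every four years

def weight(age_years: float) -> float:
    """Exponential decay: recent polls get weights near 1, old polls near 0."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

weighted_error = sum(weight(a) * e for a, e in polls) / sum(weight(a) for a, _ in polls)
unweighted_error = sum(e for _, e in polls) / len(polls)

print(f"Unweighted average error: {unweighted_error:.2f} points")  # 3.80
print(f"Recency-weighted error:   {weighted_error:.2f} points")    # about 2.74
```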

Selzer said she agreed with our rating of her company. “You did not contact us for dates, data — anything. You found what was published. No one would monkey with the findings — they were completely data-driven.”

Then again, Selzer’s grade is one reason she might like our ratings: Her company gets an A+. Nearly all of the pollsters who agreed with our rating of their polling organizations had grades of A- or better; nearly everyone who disagreed scored a C or lower.5

Anything you want to ask our pollsters? We’ll be sending them another survey soon, and are open to suggestions. (We also asked the pollsters to suggest questions for one another, and will use some of those.) Email suggestions to cbialik@fivethirtyeight.com or leave them in the comments.

Footnotes

  1. We started with the 68 pollsters with the most election polls in our database. Then we reached out to the 60 who remain active and reachable. We heard back from 45, including 42 who expressed some interest in answering our survey questions. We sent out the first poll in September, and published the results earlier this month. We sent out the second poll starting Sunday, Oct. 12, using SurveyMonkey to collect the responses. Our 24 respondents include commercial and academic pollsters who identify their polling organizations as liberal, nonpartisan or conservative. Some poll online, some by phone, some both. Not every respondent answered every question.

  2. Smith, of the University of New Hampshire, argued for the exclusion of any Web-based or interactive-voice-response polls. “Both reflect a level of self-selection that makes them convenience samples,” Smith said. “In spite of declining response rates for live-interviewer telephone polls, their methods are far superior to IVR or Web surveys.” Darrel Rowland of the Columbus Dispatch also questioned the validity of IVR polls. It’s worth noting that SurveyUSA — whose founder, Jay H. Leve, responded to our poll — is both the most prolific polling organization in FiveThirtyEight’s database and one of its top scorers, with an A rating, while using IVR. Most pollsters didn’t back any exclusions. “I will leave you to be the polling police,” Anzalone said.

  3. Of the 20 pollsters who responded, 14 said a simple polling average wouldn’t do better than models such as ours that adjust polls for various factors and incorporate non-polling data. One of the six who backed a simple average said, “Sometimes you can overthink this stuff. Sometimes straight averages are just as good. In many cases you are arguing that one aggregator is better than another because they got the result closer by a couple of tenths of a point. Not a big deal.”

  4. In his explanation of our ratings, Nate Silver wrote, “The differences in poll accuracy aren’t that large.”

  5. Five of 22 pollsters said the ratings come up in conversations with clients or potential clients. “We have been affected with regard to business and our reputation,” said one pollster who received a low grade. “It has been damaging to us outside of our own state.”

Carl Bialik was FiveThirtyEight’s lead writer for news.
