We’ve Updated Our Pollster Ratings Ahead Of The 2020 General Election

The competitive phase of the 2020 presidential primaries is over — which means we’ve updated FiveThirtyEight’s pollster ratings. These ratings cover this year’s presidential primaries, the 2019 gubernatorial elections and the occasional straggler poll we only just discovered from a past election. They include polls conducted in the final 21 days[1] before every presidential, U.S. Senate, U.S. House and gubernatorial general election (including special elections), as well as every presidential primary,[2] since 1998. We encourage you to check out the new ratings, especially when a new poll comes out and you want to gauge its reliability.

So far, it hasn’t been a great year for pollsters. The 2020 presidential primary polls had a weighted average[3] error — i.e., the absolute difference between a poll’s margin (between the top two candidates) and the actual vote-share margin[4] — of 10.2 percentage points.[5] That’s roughly tied with the 2016 presidential primaries for the biggest error in primary polling this century.
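To make the metric concrete, here is a minimal sketch of the error calculation described above, with invented numbers: the absolute difference between a poll’s margin and the actual margin, using signed margins so that a poll that had the wrong candidate ahead is penalized for the full swing.

```python
# Hypothetical illustration of the error metric described above: the absolute
# difference between a poll's margin (top two candidates) and the actual
# vote-share margin. Margins are signed: positive means candidate A ahead,
# negative means candidate B ahead. Numbers below are invented.

def poll_error(poll_margin: float, actual_margin: float) -> float:
    """Absolute poll error, in percentage points."""
    return abs(poll_margin - actual_margin)

# A poll showing one candidate up 3 points in a race the other candidate
# won by 2 points has a 5-point error (see footnote 4):
print(poll_error(3.0, -2.0))  # 5.0
```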

But we don’t blame pollsters too much for this: They have some good excuses because the 2020 Democratic primary race changed so quickly. In the span of a week (from roughly Feb. 25 to Super Tuesday), former Vice President Joe Biden dramatically reversed his electoral fortunes, and surveys just weren’t able to keep up with how fast the mood of the electorate was changing. We can see that by breaking down the error of 2020 primary polls by election date:

  • Polls of the contests on Super Tuesday had a weighted average error of 12.8 points, with 60 percent of them conducted mostly before Biden’s big South Carolina win and subsequent endorsements by onetime 2020 presidential contenders former South Bend, Indiana, Mayor Pete Buttigieg and Sen. Amy Klobuchar.
  • South Carolina polls had a weighted average error of 17.2 points (!), and 75 percent of them were conducted mostly before Rep. Jim Clyburn’s endorsement of Biden.
  • Polls of all other contests — Iowa, New Hampshire, Nevada and every post-Super Tuesday state — had a weighted average error of 7.1 points, which is quite good by historical standards for primary polls.
One week torpedoed the 2020 primary polls

Weighted average error of polls in the final 21 days* before each contest, among polls in FiveThirtyEight’s pollster ratings database

Contest Error
Iowa 5.3
New Hampshire 4.9
Nevada 8.8
South Carolina 17.2
Super Tuesday 12.8
March 10 9.7
March 17 6.6

Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular election date. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

*Excluding New Hampshire primary polls taken before the Iowa caucuses, other states’ primary polls taken before the New Hampshire primary, and primary polls whose leader or runner-up dropped out before that primary was held.
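The square-root weighting described in the table note can be sketched as follows; the firm names and error values here are invented for illustration. Each individual poll gets weight 1/√n for a firm with n polls, so a firm’s polls carry total weight √n — a firm with 16 polls counts four times as much as a firm with one, not 16 times.

```python
import math
from collections import Counter

# Sketch of the square-root weighting described above, which keeps prolific
# firms from dominating the average. Firm names and errors are invented.

polls = [
    ("Firm A", 4.0), ("Firm A", 6.0), ("Firm A", 8.0), ("Firm A", 6.0),
    ("Firm B", 12.0),
]

counts = Counter(firm for firm, _ in polls)
# Each poll weighs 1/sqrt(n_firm), so a firm with n polls carries
# total weight sqrt(n).
weights = [1 / math.sqrt(counts[firm]) for firm, _ in polls]
weighted_avg = sum(w * e for w, (_, e) in zip(weights, polls)) / sum(weights)
print(round(weighted_avg, 1))  # 8.0
```

An unweighted average of these five polls would be 7.2; the weighting pulls the figure toward Firm B because Firm A’s four polls together count only twice as much as Firm B’s one.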

In addition, some pollsters fared better in the 2020 primaries than others. We’d encourage you not to read too much into a pollster’s performance in the 2020 primaries, as it typically takes a larger sample size to ascertain a pollster’s true accuracy. But if there was a “winner” for the 2020 primaries, it was Monmouth University, whose average error of 7.5 points was the lowest among firms that released five or more primary polls. Probably not by coincidence, Monmouth also has the highest FiveThirtyEight pollster rating overall — a sterling A+.

In general, pollsters that use the time-honored methodology of interviewing respondents live over the phone are more reliable than those that use alternative platforms like the internet, and that was mostly true in the 2020 primaries too. Suffolk University, another live-caller pollster, also performed pretty well (an average error of 8.0), although Marist College had an off year (13.3). Among online pollsters, YouGov — whose online methodology is more proven than most — excelled with a 7.6-point error, almost matching Monmouth’s accuracy. The pollster with the highest average error (at least among those with five or more polls to analyze) was Change Research, at 16.1 points.

The best and worst pollsters of the 2020 primaries

Average error of polls in the final 21 days* before 2020 presidential primaries and caucuses, for pollsters that conducted at least five polls, among polls in FiveThirtyEight’s pollster ratings database

Pollster Methodology No. of Polls Average Error
Monmouth University Live 7 7.5
YouGov Online 9 7.6
Suffolk University Live 6 8.0
Emerson College IVR/Online/Text 13 8.5
AtlasIntel Online 8 8.8
Data for Progress Online/Text 29 9.3
Point Blank Political Online/Text 6 10.7
Swayable Online 22 12.0
Marist College Live 6 13.3
University of Massachusetts Lowell Online 6 14.0
Change Research Online 7 16.1

*Excluding New Hampshire primary polls taken before the Iowa caucuses, other states’ primary polls taken before the New Hampshire primary, and primary polls whose leader or runner-up dropped out before that primary was held.

Finally, as is our custom when updating the pollster ratings, let’s take a look at the accuracy of polls as a whole through three different lenses — error, “calling” elections correctly and statistical bias — each with an accompanying heat map.[6]

The first lens is polling error — a.k.a. the same metric we’ve been using so far in this article. Here’s the weighted average error of polls for each election cycle since 1998, broken down by office. We already mentioned how polls of the 2020 primaries were not all that accurate, historically speaking. But the limited polls we have for governor and U.S. House races this cycle have been pretty accurate so far: They had weighted average errors of 4.9 and 6.0 points, respectively, which is perfectly normal for these types of elections, although the sample size is still quite small.

2020 primary polls misfired, but the polls are still all right

Weighted average error of polls in the final 21 days before elections, among polls in FiveThirtyEight’s pollster ratings database

Cycle       Governor   U.S. Senate   U.S. House   Pres. General   Pres. Primary   Combined
1998        8.2        7.5           7.1          —               —               7.7
1999-2000   4.8        6.1           4.4          4.4             7.7             5.6
2001-02     5.2        4.8           5.5          —               —               5.2
2003-04     4.8        5.0           5.5          3.2             6.9             4.6
2005-06     5.0        4.2           6.2          —               —               5.2
2007-08     4.2        4.7           5.7          3.5             7.6             5.4
2009-10     4.8        4.8           6.7          —               —               5.7
2011-12     4.9        4.7           5.3          3.7             9.0             5.3
2013-14     4.6        5.4           6.7          —               —               5.4
2015-16     5.4        4.9           5.8          4.9             10.3            6.8
2017-18     5.1        4.3           5.0          —               —               4.9
2019-20*    4.9        TBD           6.0          TBD             10.2            TBD
All years   5.3        5.2           6.1          4.0             9.3             6.0

Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

*The gubernatorial and U.S. House figures are preliminary and based on small sample sizes. Because there are no polls of Senate or presidential general elections to incorporate, no combined score is given.

Quantifying polling error is arguably the best way to think about the accuracy of polls, but there are other lenses too. Say all you care about is whether polls “called” the election correctly — i.e., how often the candidate who led a poll ended up winning the election.[7] We’ve got a heat map for that too (although this isn’t our preferred method, as it’s a bit simplistic). Overall, since 1998, polls have picked the winner 79 percent of the time.[8] And by this measure, the accuracy of 2020 primary polls clocked in at exactly average.
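As a rough sketch (with invented candidates and results), the “called it correctly” share, including the half-credit rule for polls that showed a tie for the lead (see footnote 7), might be computed like this:

```python
from typing import Optional

# Sketch of the "called it correctly" measure, with the half-credit rule
# for polls that showed a tie for the lead. All names are invented.

def call_score(poll_leader: Optional[str], winner: str) -> float:
    """1.0 if the poll's leader won; 0.5 if the poll showed a tie for the
    lead and one of the tied candidates won (poll_leader=None); else 0.0."""
    if poll_leader is None:
        return 0.5
    return 1.0 if poll_leader == winner else 0.0

# (poll leader, eventual winner) for four hypothetical polls of one race:
polls = [("Smith", "Smith"), ("Jones", "Smith"), (None, "Smith"), ("Smith", "Smith")]
share = sum(call_score(leader, winner) for leader, winner in polls) / len(polls)
print(share)  # 0.625
```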

Polls pick the winner 79 percent of the time

Weighted average share of polls that correctly identified the winner in the final 21 days before elections, among polls in FiveThirtyEight’s pollster ratings database

Cycle       Governor   U.S. Senate   U.S. House   Pres. General   Pres. Primary   Combined
1998        86%        87%           49%          —               —               76%
1999-2000   81         84            54           67%             95%             76
2001-02     87         82            73           —               —               81
2003-04     76         82            64           79              94              79
2005-06     89         92            73           —               —               84
2007-08     95         95            84           94              79              88
2009-10     85         85            77           —               —               82
2011-12     91         87            70           82              62              77
2013-14     80         75            76           —               —               77
2015-16     68         78            58           71              86              77
2017-18     74         74            79           —               —               75
2019-20*    88         TBD           41           TBD             79              TBD
All years   82         83            72           79              81              79

Pollsters get half-credit if they show a tie for the lead and one of the leading candidates wins. Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

*The gubernatorial and U.S. House figures are preliminary and based on small sample sizes. Because there are no polls of Senate or presidential general elections to incorporate, no combined score is given.

Frankly, though, this isn’t a great way to think about polls. Say a poll had the Republican ahead by 1 point but the Democrat ended up winning the election by 1 point — that’s a pretty accurate result even though the winner was incorrectly identified. On the other hand, if the Republican ended up winning by 20 points, the poll did correctly identify the winner — but the absolute error was quite large.

Additionally, polls of close elections unsurprisingly make the wrong call much more frequently than polls of races where there is no doubt which candidate is going to win. In both the 2020 primaries and overall, polls showing a blowout (i.e., the leader led by 20 points or more) picked the correct winner almost all the time, but they were right only about half the time when they showed a lead smaller than 3 points.

Polls often miss close races

Share of polls that correctly identified the winner in the final 21 days before elections, by how close the poll showed the race

          % of Polls Picking Winner
Margin    2020 Primaries   All Races Since 1998
<3 pts    46%              57%
3-6       72               70
6-10      79               84
10-15     75               93
15-20     82               97
≥20       98               99

Pollsters get half-credit if they show a tie for the lead and one of the leading candidates wins. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

This is why, when a poll shows a close race, your takeaway shouldn’t be, “This candidate leads by 1 point!” but rather, “This race is a toss-up.” Polls’ true utility isn’t in telling us who will win, but rather in roughly how close a race is — and, therefore, how confident we should be in the outcome.

The third and final lens we’ll use is polls’ statistical bias. Statistical bias is different from error in that it tells us in which direction the error ran — i.e., did the polls consistently under- or overrate a specific political party? Not much has changed in this final table since the last time we published it, because we exclude presidential primaries from calculations of statistical bias (since all primary candidates belong to the same party), but we think it’s worth reemphasizing its findings as we enter the 2020 general election.
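As an illustration (with invented margins), statistical bias is just the signed version of the error metric, averaged across polls so that misses in opposite directions cancel out:

```python
# Sketch of statistical bias: the signed poll miss, averaged. Margins here
# are Democrat minus Republican, in points, so a positive average means the
# polls overrated the Democrat. All numbers are invented for illustration.

def signed_miss(poll_dem_margin: float, actual_dem_margin: float) -> float:
    """Positive = poll overstated the Democrat; negative = the Republican."""
    return poll_dem_margin - actual_dem_margin

# (poll margin, actual margin) for three hypothetical races:
pairs = [(5.0, 2.0), (-1.0, -2.0), (3.0, 4.0)]
avg = sum(signed_miss(p, a) for p, a in pairs) / len(pairs)
label = f"{'D' if avg > 0 else 'R'}+{abs(avg):.1f}"
print(label)  # D+1.0
```

Note how the third race, where the poll understated the Democrat by a point, partially cancels the first two misses — which is exactly why a cycle can have sizable errors yet little net bias.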

Polling bias is small and unpredictable

Weighted average statistical bias of polls in the final 21 days before general elections, among polls in FiveThirtyEight’s pollster ratings database

Cycle       Governor   U.S. Senate   U.S. House   President   Combined
1998        R+5.6      R+4.5         R+0.8        —           R+3.8
1999-2000   R+0.4      R+2.7         D+1.2        R+2.4       R+1.9
2001-02     D+3.0      D+1.3         D+1.5        —           D+2.2
2003-04     D+1.7      D+1.1         D+2.7        D+1.1       D+1.5
2005-06     D+0.2      R+1.3         D+0.7        —           D+0.0
2007-08     R+0.1      D+0.4         D+1.1        D+1.0       D+0.8
2009-10     R+0.1      R+0.7         D+1.7        —           D+0.6
2011-12     R+1.6      R+3.1         R+2.9        R+2.5       R+2.8
2013-14     D+2.3      D+2.6         D+3.8        —           D+2.7
2015-16     D+3.3      D+2.7         D+4.1        D+3.3       D+3.0
2017-18     R+0.9      D+0.1         R+0.6        —           R+0.5
2019-20*    D+2.9      TBD           D+6.0        TBD         TBD
All years   D+0.5      D+0.1         D+0.8        D+0.3       D+0.3

Bias is calculated only for elections where the top two finishers were a Republican and a Democrat. Therefore, it is not calculated for presidential primaries. Averages are weighted by the square root of the number of polls that a particular pollster conducted for that particular type of election in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

*The gubernatorial and U.S. House figures are preliminary and based on small sample sizes. Because there are no polls of Senate or presidential races to incorporate, no combined score is given.

Those findings: Over the long term, there is no meaningful partisan statistical bias in polling. All the polls in our data set combine for a weighted average statistical bias of just 0.3 points toward Democrats. Individual election cycles can have more significant biases — and, importantly, the bias usually runs in the same direction for every office in a given cycle — but there is no pattern from year to year. In other words, just because polls overestimated Democrats in 2016 does not mean they will do the same in 2020. It’s good to be aware of the potential for polling error heading into the election, but that error could benefit either party. In fact, we’ve observed that preelection attempts to guess which way the polling error will run seem to have an uncanny knack for being wrong — which could be a coincidence or could reflect very real overcompensation.

So despite a rocky primary season, we recommend that you trust the polls in 2020. Of course, “trust the polls” doesn’t mean trust all the polls; that’s why we have our pollster ratings. In addition to our handy letter grades, that page contains each pollster’s average error, statistical bias and the share of races it called correctly, plus details on whether it adheres to methodological best practices and a lot more. You can also download our entire pollster ratings data set, including all the polls that went into the tables in this article, to investigate further on your own. (Wondering how much more accurate live-caller polls are than online ones? Or which state’s polls are the most error-prone? Wonder no more.)

For a detailed methodology of the pollster ratings, check out this 2014 article; we made a few tweaks in 2016 and 2019, such as giving a “slashed” letter grade (e.g., “A/B”) to pollsters with a smaller body of work. There are no methodological changes this year, except we do have a bit of housekeeping that probably only pollsters will be interested in: Starting with our next pollster ratings update (after the 2020 elections), we will no longer give active pollsters a ratings boost for once belonging to the National Council on Public Polls (a now-defunct polling consortium whose members were committed to methodological transparency). Active pollsters will need to participate in the American Association for Public Opinion Research’s Transparency Initiative or contribute to the Roper Center for Public Opinion Research archive to get credit in the “NCPP/AAPOR/Roper” column, which also determines which pollsters we consider “gold standard.”[9] As always, if anyone has any questions about any aspect of the pollster ratings, you can always reach us at polls@fivethirtyeight.com.



Footnotes

  1. Based on the poll’s median date.

  2. For presidential primaries, we excluded from our analysis New Hampshire primary polls taken before the Iowa caucuses, other states’ primary polls taken before the New Hampshire primary, and primary polls whose leader or runner-up dropped out before that primary was held.

  3. To avoid giving prolific pollsters too much influence over the average, it is weighted by the number of polls each pollster conducted. Specifically, the weights are based on the square root of the number of polls that a firm conducted. For instance, a pollster that conducted 16 polls of a given type of election in a given cycle would be weighted four times as heavily as a pollster that conducted just one poll.

  4. For example, if a poll gave the Republican candidate a lead of 3 percentage points but the Democrat won the election by 2 points, that poll had a 5-point error.

  5. Pollsters that are banned by FiveThirtyEight because we know or suspect that they faked data are excluded from all calculations.

  6. These heat maps use the same rules as enumerated in footnotes 1-5 above, including weighting pollsters by the number of polls they conducted of that particular type of election in that particular cycle, and excluding polls we know or believe are fake.

  7. We give pollsters half-credit on this score if they show a tie race and one of the leading candidates wins.

  8. Again, weighting by the number of polls conducted by each pollster.

  9. To meet our gold standard, pollsters must use live people (as opposed to robocalls) to conduct interviews over the phone, call cell phones as well as landlines and participate in AAPOR, Roper or NCPP.

Nathaniel Rakich is FiveThirtyEight’s elections analyst.
