The competitive phase of the 2020 presidential primaries is over — which means we’ve updated FiveThirtyEight’s pollster ratings. These ratings cover this year’s presidential primaries, the 2019 gubernatorial elections and the occasional straggler poll we only just discovered from a past election. They include polls conducted in the final 21 days1 before every presidential, U.S. Senate, U.S. House and gubernatorial general election (including special elections), as well as every presidential primary,2 since 1998. We encourage you to check out the new ratings, especially when a new poll comes out and you want to gauge its reliability.
So far, it hasn’t been a great year for pollsters. The 2020 presidential primary polls had a weighted average3 error — i.e., the absolute difference between a poll’s margin (between the top two candidates) and the actual vote share margin4 — of 10.2 percentage points.5 That’s roughly tied with the 2016 presidential primaries for the biggest error in primary polling this century.
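The error metric described above can be sketched in a few lines of Python. The numbers and weights below are purely illustrative (they are not real polls, and FiveThirtyEight's actual weighting scheme is more involved than a simple list of weights):

```python
# Sketch of the polling-error metric: the absolute difference between a
# poll's margin (between the top two candidates) and the actual margin,
# combined into a weighted average. Illustrative numbers only.

def poll_error(poll_margin, actual_margin):
    """Absolute error between a poll's margin and the actual vote margin."""
    return abs(poll_margin - actual_margin)

def weighted_average_error(errors, weights):
    """Average the per-poll errors, giving some polls more weight than others."""
    return sum(e * w for e, w in zip(errors, weights)) / sum(weights)

# Example: a poll showed the leader up 4, but they won by 12 -- an 8-point error.
polls = [(4.0, 12.0), (-2.0, 3.0), (10.0, 7.0)]  # (poll margin, actual margin)
errors = [poll_error(p, a) for p, a in polls]
weights = [1.0, 0.8, 1.2]  # hypothetical weights
print(round(weighted_average_error(errors, weights), 1))
```

Note that the margin error is what matters here, not the error on any single candidate's vote share.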
But we don’t blame pollsters too much for this: They have some good excuses because the 2020 Democratic primary race changed so quickly. In the span of a week (from roughly Feb. 25 to Super Tuesday), former Vice President Joe Biden dramatically reversed his electoral fortunes, and surveys just weren’t able to keep up with how fast the mood of the electorate was changing. We can see that by breaking down the error of 2020 primary polls by election date:
- Polls of the contests on Super Tuesday had a weighted average error of 12.8 points, with 60 percent of them conducted mostly before Biden’s big South Carolina win and subsequent endorsements by onetime 2020 presidential contenders former South Bend, Indiana, Mayor Pete Buttigieg and Sen. Amy Klobuchar.
- South Carolina polls had a weighted average error of 17.2 points (!), and 75 percent of them were conducted mostly before Rep. Jim Clyburn’s endorsement of Biden.
- Polls of all other contests — Iowa, New Hampshire, Nevada and every post-Super Tuesday state — had a weighted average error of 7.1 points, which is quite good by historical standards for primary polls.
In addition, some pollsters fared better in the 2020 primaries than others. We’d encourage you not to read too much into a pollster’s performance in the 2020 primaries, as it typically takes a larger sample size to ascertain a pollster’s true accuracy. But if there was a “winner” for the 2020 primaries, it was Monmouth University, whose average error of 7.5 points was the lowest among firms that released five or more primary polls. Probably not by coincidence, Monmouth also has the highest FiveThirtyEight pollster rating overall — a sterling A+.
In general, pollsters that use the time-honored methodology of interviewing respondents live over the phone are more reliable than those that use alternative platforms like the internet, and that was mostly true in the 2020 primaries too. Suffolk University, another live-caller pollster, also performed pretty well (an average error of 8.0), although Marist College had an off year (13.3). Among online pollsters, YouGov — whose online methodology is more proven than most — excelled with a 7.6-point error, almost matching Monmouth’s accuracy. The pollster with the highest average error (at least among those with five or more polls to analyze) was Change Research, at 16.1 points.
| Pollster | Methodology | No. of Polls | Average Error |
| --- | --- | --- | --- |
| Data for Progress | Online/Text | 29 | 9.3 |
| Point Blank Political | Online/Text | 6 | 10.7 |
| University of Massachusetts Lowell | Online | 6 | 14.0 |
Finally, as is our custom when updating the pollster ratings, let’s take a look at the accuracy of polls as a whole through three different lenses — error, “calling” elections correctly and statistical bias — each with an accompanying heat map.6
The first lens is polling error — a.k.a. the same metric we’ve been using so far in this article. Here’s the weighted average error of polls for each election cycle since 1998, broken down by office. We already mentioned how polls of the 2020 primaries were not all that accurate historically speaking. But the limited polls we have for governor and U.S. House races this cycle have been pretty accurate so far. They had weighted average errors of 4.9 and 6.0, respectively, which is perfectly normal for these types of elections, although the sample size is still quite small.
[Table: weighted average poll error by election cycle and office (Governor, U.S. Senate, U.S. House, General, Primary, Combined)]
Quantifying polling error is arguably the best way to think about the accuracy of polls, but there are other lenses too. Say all you care about is whether polls “called” the election correctly — i.e., how often the candidate who led a poll ended up winning the election.7 We’ve got a heat map for that too (although this isn’t our preferred method, as it’s a bit simplistic). Overall, since 1998, polls have picked the winner 79 percent of the time.8 And by this measure, the accuracy of 2020 primary polls clocked in at exactly average.
[Table: share of polls that correctly called the winner, by election cycle and office (Governor, U.S. Senate, U.S. House, General, Primary, Combined)]
Frankly, though, this isn’t a great way to think about polls. Say a poll had the Republican ahead by 1 point but the Democrat ended up winning the election by 1 point — that’s a pretty accurate result even though the winner was incorrectly identified. On the other hand, if the Republican ended up winning by 20 points, the poll did correctly identify the winner — but the absolute error was quite large.
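The two scenarios above make the point concrete, and they can be sketched in code. This is a minimal illustration with made-up margins (positive numbers mean the first candidate is ahead), not FiveThirtyEight's actual scoring logic:

```python
# Sketch of the "calling the election" lens: a poll is scored as "right"
# if the candidate it showed ahead went on to win, no matter the margin.
# Margins are (poll, actual); positive = first candidate ahead.

def called_correctly(poll_margin, actual_margin):
    # Same sign means the poll's leader actually won the race.
    return (poll_margin > 0) == (actual_margin > 0)

polls = [
    (1.0, -1.0),   # up 1 in the poll, lost by 1: wrong "call," tiny error
    (5.0, 25.0),   # up 5 in the poll, won by 25: right "call," huge error
]

for poll, actual in polls:
    print(called_correctly(poll, actual), abs(poll - actual))
```

The first poll missed by only 2 points yet gets marked wrong, while the second missed by 20 points and gets marked right, which is exactly why call accuracy alone is a blunt instrument.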
Additionally, polls of close elections unsurprisingly make the wrong call much more frequently than polls of races where there is no doubt which candidate is going to win. In both the 2020 primaries and overall, polls showing a blowout (i.e., the leader led by 20 points or more) picked the correct winner almost all the time, but they were right only about half the time when they showed a lead smaller than 3 points.
[Table: percentage of polls picking the winner, by poll margin, for the 2020 primaries and all races since 1998]
This is why, when a poll shows a close race, your takeaway shouldn’t be, “This candidate leads by 1 point!” but rather, “This race is a toss-up.” Polls’ true utility isn’t in telling us who will win, but rather in roughly how close a race is — and, therefore, how confident we should be in the outcome.
The third and final lens we’ll use is polls’ statistical bias. Statistical bias is different from error in that it tells us in which direction the error ran — i.e., did the polls consistently under- or overrate a specific political party? Not much has changed in this final table since the last time we published it, because we exclude presidential primaries from calculations of statistical bias (since all primary candidates belong to the same party), but we think it’s worth reemphasizing its findings as we enter the 2020 general election.
[Table: weighted average statistical bias of polls by election cycle and office (Governor, U.S. Senate, U.S. House, President, Combined)]
Those findings: Over the long term, there is no meaningful partisan statistical bias in polling. All the polls in our data set combine for a weighted average statistical bias of 0.3 points toward Democrats. Individual election cycles can have more significant biases — and, importantly, the bias usually runs in the same direction for every office — but there is no pattern from year to year. In other words, just because polls overestimated Democrats in 2016 does not mean they will do the same in 2020. It’s good to be aware of the potential for polling error heading into the election, but that error could benefit either party. In fact, we’ve observed that preelection attempts to guess which way the polling error will run seem to have an uncanny knack for being wrong — which could be a coincidence or could reflect very real overcompensation.
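The difference between bias and error comes down to whether you keep the sign of each miss. A minimal sketch, with made-up margins expressed as Democrat-minus-Republican (this is a simple unweighted average, not FiveThirtyEight's exact weighting):

```python
# Bias keeps the direction of each miss, so misses toward opposite
# parties cancel out; error takes absolute values, so they accumulate.
# Each entry is (poll margin minus actual margin) for one race,
# positive = poll overestimated the Democrat. Illustrative numbers only.

misses = [3.0, -2.5, 2.0, -2.5]

bias = sum(misses) / len(misses)                    # signed average
error = sum(abs(m) for m in misses) / len(misses)   # unsigned average

print(bias, error)
```

Here the polls were off by 2.5 points per race on average, yet the misses cut both ways, so the net partisan bias is zero. That is the pattern the long-term data shows: real error in every cycle, but no consistent direction.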
So despite a rocky primary season, we recommend that you trust the polls in 2020. Of course, “trust the polls” doesn’t mean trust all the polls; that’s why we have our pollster ratings. In addition to our handy letter grades, that page contains each pollster’s average error, statistical bias and the share of races it called correctly, plus details on whether it adheres to methodological best practices and a lot more. You can also download our entire pollster ratings data set, including all the polls that went into the tables in this article, to investigate further on your own. (Wondering how much more accurate live-caller polls are than online ones? Or which state’s polls are the most error-prone? Wonder no more.)
For a detailed methodology of the pollster ratings, check out this 2014 article; we made a few tweaks in 2016 and 2019, such as giving a “slashed” letter grade (e.g., “A/B”) to pollsters with a smaller body of work. There are no methodological changes this year, except we do have a bit of housekeeping that probably only pollsters will be interested in: Starting with our next pollster ratings update (after the 2020 elections), we will no longer give active pollsters a ratings boost for once belonging to the National Council on Public Polls (a now-defunct polling consortium whose members were committed to methodological transparency). Active pollsters will need to participate in the American Association for Public Opinion Research’s Transparency Initiative or contribute to the Roper Center for Public Opinion Research archive to get credit in the “NCPP/AAPOR/Roper” column, which also determines which pollsters we consider “gold standard.”9 As always, if anyone has any questions about any aspect of the pollster ratings, you can always reach us at firstname.lastname@example.org.