Further polling since the Republican National Convention has tended to confirm our impressions from earlier this week: Donald Trump has almost certainly gotten a convention bounce, and has moved into an extremely close race with Hillary Clinton. But Trump’s convention bounce is not all that large. You can find polls showing almost no bounce for Trump, and others showing gains in the mid-to-high single digits. Those disagreements are pretty normal and, overall, the polls suggest a net gain of 3 to 4 percentage points for Trump. That would be right in line with the average bounce in conventions since 2004, although it is toward the small side by historical standards.
Trump’s position in our polls-plus forecast, which adjusts for convention bounces, is almost unchanged over the past week; the model continues to give him about a 40 percent chance of winning the election, meaning that Clinton has a 60 percent chance.
Without adjusting for the convention bounce, however, the election is a dead heat. Our polls-only forecast, which doesn’t account for the convention bounce, gives Clinton just a 53 percent chance of winning, and our now-cast — which is more aggressive than the polls-only forecast and estimates what would happen in a hypothetical election held today — has Trump as a 55 percent favorite.
But I want to focus on some relatively technical subject matter today, apart from Trump’s convention bounce. FiveThirtyEight’s forecast isn’t the only one out there. Most of the others give Clinton a better chance than we do — some of them put her chances as high as 80 percent, in fact, despite her recent slide in the polls. Why are our models more pessimistic about Clinton’s chances?
I’ve noticed that, when discussing differences between our forecasts and others, people tend to focus on which polls are included in the models or how the polls are weighted. The truth is, that stuff usually doesn’t matter very much. There are other things that make a much bigger difference.
One relatively important factor, for instance, is whether you use the version of polls with third-party candidates Gary Johnson and Jill Stein included, or the two-way matchup between Clinton and Trump instead. Recently, polls that included Johnson and Stein as options have been a percentage point or so worse for Clinton, on average. FiveThirtyEight’s models use the version of polls with third-party candidates when they have the choice, which slightly helps Trump.
Another question is how the models account for uncertainty in the forecast. For instance, if Clinton leads by 4 percentage points in a given state, how does that translate into her probability of winning it on Nov. 8? And given various probabilities of her winning each of the 50 states and the District of Columbia, how does that translate into her probability of winning the Electoral College? This is tricky stuff, and we’ll save the detail for another day. For now, I’ll just say that it’s a mistake to assume that the error in each state is independent from the others: If both Ohio and Pennsylvania are tossups on Election Day, Clinton will probably either lose both or win both, instead of splitting them. This means that a narrow lead in the Electoral College is not as safe as it might seem.
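To see why correlated errors matter, here’s a minimal simulation sketch — with made-up, illustrative error sizes, not FiveThirtyEight’s actual parameters — in which each state’s Election Day margin is its polling lead plus a shared national error plus an independent state-specific error:

```python
import random

def split_probability(lead_a, lead_b, shared_sd, state_sd, trials=20000, seed=42):
    """Estimate how often two states split (one goes each way).

    Hypothetical error model: each state's final margin is its polling
    lead plus a shared national error plus an independent state error.
    All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    splits = 0
    for _ in range(trials):
        national = rng.gauss(0, shared_sd)  # hits both states at once
        a = lead_a + national + rng.gauss(0, state_sd)
        b = lead_b + national + rng.gauss(0, state_sd)
        if (a > 0) != (b > 0):
            splits += 1
    return splits / trials

# Two tossup states whose error is mostly shared: they usually move together.
correlated = split_probability(0.0, 0.0, shared_sd=4.0, state_sd=2.0)

# The same total error treated as fully independent: splits look far more
# likely, which makes a narrow Electoral College lead seem safer than it is.
independent = split_probability(0.0, 0.0, shared_sd=0.0, state_sd=4.47)
```

Under the correlated model the two tossups split only about a fifth of the time; treated independently, they split about half the time — which is exactly the overconfidence that assuming independence bakes into an Electoral College forecast.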
But for the time being, most of the differences are caused by how quickly the models adjust to new polling data. Over the course of July, Clinton has steadily declined in the polls. How aggressive should a model be in accounting for that shift?
A lot of election models, including FiveThirtyEight’s, use a variation of loess regression as part of their process, a technique for drawing trend lines through a series of data. Loess regression is a good tool, but one problem is that it can draw rather different-looking trend lines through the same data, depending on something called the bandwidth or the smoothing parameter. Below, for instance, you’ll find two sets of loess curves based on all national polls since March 1. (Note: This is not the technique the FiveThirtyEight models use — for better or worse, our process is a lot more involved — but this will give you a taste of some of the challenges loess regression can present.)
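To make the bandwidth question concrete, here’s a bare-bones local linear regression — the core step of loess — fit to made-up polling margins. This is a toy sketch, not the FiveThirtyEight models’ actual code, and the data are invented for illustration:

```python
def loess_fit(xs, ys, x0, bandwidth):
    """Tricube-weighted local linear fit at x0: one evaluation of a
    bare-bones loess. `bandwidth` is the fraction of points used around
    x0; a smaller value gives a more aggressive, wigglier trend line."""
    n = len(xs)
    k = max(2, int(bandwidth * n))
    # Keep the k nearest points to x0 and weight them by tricube distance.
    nearest = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in nearest) or 1.0
    w = [(1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in nearest]
    # Weighted least squares for y = a + b*x over those points.
    sw = sum(w)
    sx = sum(wi * xs[i] for wi, i in zip(w, nearest))
    sy = sum(wi * ys[i] for wi, i in zip(w, nearest))
    sxx = sum(wi * xs[i] ** 2 for wi, i in zip(w, nearest))
    sxy = sum(wi * xs[i] * ys[i] for wi, i in zip(w, nearest))
    denom = sw * sxx - sx * sx
    b = (sw * sxy - sx * sy) / denom if denom else 0.0
    a = (sy - b * sx) / sw
    return a + b * x0

# Invented margins: flat at Clinton +5 for 20 days, then a sharp decline.
days = list(range(30))
margins = [5.0] * 20 + [5.0 - 0.6 * (d - 19) for d in range(20, 30)]

aggressive = loess_fit(days, margins, x0=29, bandwidth=0.3)    # tracks the drop
conservative = loess_fit(days, margins, x0=29, bandwidth=0.8)  # smooths it away
```

With the narrow bandwidth, the trend line at the last day sits right on the new, lower margin; with the wide one, the flat early data pulls the estimate well above it — the same data, two rather different readings of where the race stands.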
The more aggressive trend line shows several peaks and valleys for Clinton. She rises in the polls in March and April, falls in May after Trump wraps up the Republican nomination, regains a significant lead in June, and then sees her numbers tumble in July. The more conservative trend line instead shows a slow and relatively steady decline for Clinton. And the two trend lines come to rather different conclusions about where the race stands right now: The conservative one still has Clinton ahead by about 2 percentage points, while the aggressive trend line has Trump up by 1 point.
So which one is correct? That’s another tricky question. If you’re using loess regression for descriptive purposes — to illustrate how the polls have moved in the past — the more aggressive trend line is clearly better. It does a much better job of capturing movement in the polls — we had more than enough data to know that Clinton moved up in the polls from May to June, for instance. But if you’re using loess to make predictions — to anticipate where the polls are going to go next — there’s an argument for using a more conservative setting. That’s because short-term movement in the polls often reverses itself — a candidate gets a bad news cycle, and she drops a couple of percentage points, but she recovers them once the news moves along to another subject. The convention bounce is one example of this, in fact, since the bounces often reverse themselves after a few weeks.
We spent a lot of time on this issue when originally building our model in 2008 and then when revising it in 2012 and earlier this year, trying to figure out how aggressive these loess curves should be in order to maximize predictive accuracy. The short answer is that you want an aggressive setting — very aggressive, in fact — late in the campaign, and a more conservative one earlier on.1 Still, it’s certainly also possible to be too conservative, which could mean missing the considerable shift away from Clinton that began a few weeks ago in the polls, well ahead of the conventions.
Another tricky question is how to reconcile state polls with national polls. For example, there have been no polls of Pennsylvania over the past two weeks, during which time Clinton’s lead has evaporated in national polls (and often also in polls of other states, where we’ve gotten them). The FiveThirtyEight model uses what we call a trend-line adjustment to bring those old polls up to date with the current trend. That’s why our polls-only forecast shows Pennsylvania as a tossup even though Trump has led only one poll there all year. Those older polls came from a time when Clinton led by 5 or 6 or 7 percentage points nationally, and they generally showed her up by about the same margin in Pennsylvania. Now that the national race is almost tied, it’s probably safe to assume that Pennsylvania is very close also. Some of the competing models don’t do this, and we think that’s probably a mistake, since it means their state-by-state forecasts will lag a few weeks behind, even when it’s obvious there’s been a big shift in the race.
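The basic idea of a trend-line adjustment can be sketched in a few lines — this is a deliberately simplified illustration of the concept, not FiveThirtyEight’s actual adjustment, and the numbers are the round figures from the Pennsylvania example above:

```python
def trend_adjust(state_margin, national_then, national_now):
    """Shift a stale state poll by the change in the national trend line
    since it was taken. A simplified sketch of the idea: the state is
    assumed to move in step with the national race."""
    return state_margin + (national_now - national_then)

# An old Pennsylvania poll had Clinton +6, taken when she led nationally
# by about 6. With the national race now roughly tied, the adjusted
# estimate puts the state at about even — a tossup.
adjusted = trend_adjust(6.0, national_then=6.0, national_now=0.0)
```

A model without some version of this step effectively treats a three-week-old state poll as current, which is why its state forecasts can lag well behind an obvious national shift.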
Bottom line: Although there are other factors that matter around the margin, our models show better numbers for Trump mostly because they’re more aggressive about detecting trends in polling data. For the past couple of weeks — and this started before the conventions, so it’s not just a convention bounce — there’s been a strong trend away from Clinton and toward Trump. Although there’s always the risk of overreaction, this time our models were ahead of the curve in understanding the shift. But if Clinton rebounds next month, our models may be among the first to show that as well.