It’s Dumb That The All-Star Game ‘Counts,’ But It’s Mostly Harmless

Stocked with a cast of insanely talented young players, Tuesday’s MLB All-Star Game in Cincinnati was a perfectly pleasant affair, and a nice showcase for the current (and future) state of the sport. But it wouldn’t be an All-Star Game if it weren’t also noted that the outcome of a silly exhibition contest continues to be used to determine home-field advantage in the World Series.

That’s pretty dumb, and there’s no shortage of people in the game who’d like to see it changed. (Except, apparently, new MLB Commissioner Rob Manfred.) There are also plenty of alternative suggestions for what should determine World Series home-field instead. But as dumb as it is for the All-Star Game to “count,” what really matters is the effect the rule has had on the 12 Fall Classics since it was put into place.

To quantify the ramifications of the policy, I calculated pre-series win probabilities for all World Series teams from 2003 to 2014 using their regular-season pythagorean records.1 I looked at how much those odds shifted depending on whether the frivolous, All-Star-based home-field rule was used, or two alternatives: the equally arbitrary (but at least consistent) pre-2003 policy of alternating home-field between leagues each season, and a simple rule that bestowed home-field upon the team with the superior regular-season record.
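The footnote spells out the machinery: pythagorean records fed into the log5 formula with a 54 percent home edge, then into a best-of-seven series probability. A minimal sketch of that pipeline, assuming a standard pythagorean exponent of 1.83 and the 2-3-2 World Series home-field order (the function names and structure here are illustrative, not the article's actual code):

```python
def pythagorean(runs_scored, runs_allowed, exponent=1.83):
    """Pythagorean expected winning percentage (exponent is an assumption)."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def log5(p_a, p_b):
    """Log5 chance that a team with true talent p_a beats one with p_b
    on neutral ground."""
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

def with_home_edge(p_neutral, home, hfa=0.54):
    """Tilt a neutral-site probability by MLB's ~54 percent home winning rate."""
    h = hfa if home else 1 - hfa
    return p_neutral * h / (p_neutral * h + (1 - p_neutral) * (1 - h))

def series_win_prob(p_neutral, schedule="HHAAAHH", wins_needed=4):
    """Chance the team holding home-field wins a best-of-seven played in the
    given home ('H') / away ('A') order -- the 2-3-2 World Series format."""
    def play(game, wins, losses):
        if wins == wins_needed:
            return 1.0
        if losses == wins_needed:
            return 0.0
        p = with_home_edge(p_neutral, home=(schedule[game] == "H"))
        return (p * play(game + 1, wins + 1, losses)
                + (1 - p) * play(game + 1, wins, losses + 1))
    return play(0, 0, 0)
```

For two evenly matched teams (`p_neutral = 0.5`), this puts the club with home-field only a shade above 50 percent, which is why swapping home-field rarely changes which team is favored.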

[Table: shifts in pre-series World Series win probabilities under each home-field rule, 2003 to 2014]

Going from the current format to an old-school approach that alternates home-field by league every year, the pre-series odds would have shifted by an average of +/- 1.4 percentage points each year over the past 12 seasons. Home-field itself would have been different seven times in this alternate universe, though it probably wouldn’t have made much difference to the outcomes of most series. Twice in 12 years (2005 and 2013), the margin between the teams was slim enough that swapping home-field would have changed which team was favored.

The team with home-field in reality won each of those series, so the argument could be made that linking home-field advantage to the All-Star Game swung a pair of championships. But each of those series was also nearly a 50-50 proposition regardless of who held home-field, so if they were replayed, you'd expect a different set of results (compared with reality) about 75 percent of the time from randomness alone. In total, if we re-simulated the last 12 World Series in that alternate universe, we'd see about 6.2 different winners over that span, but only 1.7 of them could be directly traced to ditching the All-Star Game-based home-field format. Per decade, that works out to about 1.4 titles changing hands because of the change in format.
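Numbers like "6.2 different winners, 1.7 traced to the format" can be formalized. If the real-life champion's win probability under an alternate rule was q, a replay of that series differs from reality with probability 1 - q; and if its probabilities under the two rules were p and q, then simulations of the two universes that share the same underlying luck produce different winners with probability |p - q|. One plausible sketch of that bookkeeping, with invented per-series probabilities (placeholders, not the article's data):

```python
# Win probability of each real-life champion, under the actual
# (All-Star-based) rule and under an alternate home-field rule.
# All numbers below are invented placeholders, not the article's data.
p_actual_rule = [0.55, 0.51, 0.62, 0.50]
p_alternate_rule = [0.53, 0.49, 0.62, 0.52]

# Expected count of series whose winner differs from real history
# if each were replayed under the alternate rule:
expected_different = sum(1 - q for q in p_alternate_rule)

# Expected count of those differences attributable to the rule itself:
# coupling the two universes on the same random draw, a series flips
# only when the draw lands between p and q, i.e. with probability |p - q|.
expected_traced_to_rule = sum(abs(p - q)
                              for p, q in zip(p_actual_rule, p_alternate_rule))
```

Because most World Series matchups sit close to 50-50, the first sum is large no matter what the home-field rule is, while the second stays small: the rule rarely moves any single series probability by more than a point or two.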

That number gets smaller if we compare the actual results since 2003 to another hypothetical universe in which home-field is assigned to the team with the better regular-season record. (This seems to be the most popular suggested format change among the reformist crowd.) Under those rules, the favorite would change for only one World Series (2013), and if we replayed the past 12 years numerous times, we'd see only about 0.7 champions differ from reality as a result of the format change (which averages out to 0.6 per decade). The more that things would change, the more they'd kinda sorta stay the same.

This isn’t to say that MLB shouldn’t look into switching to a more sensible method of determining home-field advantage in the World Series, preferably one that doesn’t involve an exhibition game. But the stupidity of the current policy probably outpaces its actual effect, given that the difference between it and a record-based approach is one different (perhaps more deserving) champion every 17 years or so.

Footnotes

  1. More specifically, I plugged those records into the log5 formula with a home-field advantage of 54 percent — MLB’s long-term average winning percentage for home teams — and, in turn, plugged those numbers into WhoWins.com’s series probability formulae.

Neil Paine is a senior sportswriter for FiveThirtyEight.
