The West is Better Than the East, But That Doesn’t Mean the Spurs Will Beat the Heat

The Miami Heat just racked up their fourth straight conference championship, and they did it with relative ease. This year’s Eastern Conference was one of the weakest in NBA history. Six of the playoff teams in the East didn’t have a good enough record to qualify for the playoffs in the West.

If that makes the Heat sound like merely the best of a weak lot, and little threat to beat the San Antonio Spurs, the West’s standard-bearers, think again. The stats suggest that, come the finals, a weak conference doesn’t mean a weak conference champion. Bizarrely, in recent years it’s meant the opposite.

That’s good news for Miami, because even amid a decade of Western dominance, the East this season stood out for being really, really bad. This was the 68th regular season that the East and West played against each other, and it was the East’s fourth-worst showing. Eastern Conference teams won 37 percent of their games against the West (166 out of 450 overall). And it could have been much worse. The East won less than 30 percent of games through early December. Only in 1948 did the West rule the East by a considerably greater margin.1

[Chart: East vs. West regular-season results by season]

But the regular season, as most NBA fans will concede, doesn’t matter much. And that’s certainly true of the widening continental divide in strength. For the last 20 years the regular-season series between Eastern Conference and Western Conference teams haven’t mattered come the finals.

The West started to dominate the NBA in 2000, just as regular-season dominance began predicting a poor showing in the finals. Over the last 14 postseasons, the better a conference did against the other in the regular season, the worse its champion did in the finals.2 That’s not to say the Western Conference champ lost — nine out of 14 times it won — but the years that the West most dominated the regular season tended to be the years the Eastern Conference champ won.3

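To make the methodology concrete, here is a minimal sketch of how a correlation like the one in footnote 2 could be computed: a season-by-season Pearson correlation between the East’s interconference winning percentage and its champ’s finals winning percentage. The numbers below are placeholders, not the actual 2000-2014 figures, and SciPy is just one convenient tool for the job.

```python
# A minimal sketch of the correlation described in footnote 2.
# The season-by-season values here are illustrative placeholders,
# not the actual 2000-2014 data from Basketball Reference.
from scipy.stats import pearsonr

# East's regular-season winning percentage against the West, one value per season
east_vs_west_pct = [0.42, 0.39, 0.44, 0.36, 0.40, 0.43, 0.41,
                    0.38, 0.45, 0.40, 0.42, 0.39, 0.41, 0.37]

# East champion's winning percentage in that season's finals
east_champ_finals_pct = [0.17, 0.33, 0.00, 0.80, 0.43, 0.67, 0.33,
                         0.29, 0.17, 0.33, 0.33, 0.67, 0.57, 0.43]

r, p_value = pearsonr(east_vs_west_pct, east_champ_finals_pct)
print(f"correlation coefficient: {r:.2f}")  # a negative r would match the article's finding
```
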
That the head-to-head conference record isn’t predictive shouldn’t be a huge surprise. The contests represent a minority of games played during the regular season and as the conferences have expanded, their teams’ records against each other say less than they used to about any one team’s performance against the other conference.

Even looking just at the interconference record of the finalists doesn’t provide much insight: Since 2000, the difference between the interconference records of the two finalists has had zero correlation with the result of the NBA Finals.4 If anything, the East’s weakness has become a strength for its champion. Because of the league’s unbalanced schedule, each team plays roughly two-thirds of its games within the conference. That’s been an asset for strong Eastern Conference teams that can use those games to beat up on lesser opponents, expend less effort and risk fewer injuries.

What is correlated with winning the league title is the margin between the two finalists’ regular-season records.5 That bodes poorly for Miami, whose regular-season winning percentage this year was about 10 percentage points lower than San Antonio’s.
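
A quick back-of-the-envelope check of that gap, assuming the 2013-14 records of 62-20 for San Antonio and 54-28 for Miami:

```python
# Back-of-the-envelope check of the finalists' regular-season gap,
# assuming 2013-14 records of 62-20 (San Antonio) and 54-28 (Miami).
spurs_pct = 62 / 82   # about .756
heat_pct = 54 / 82    # about .659
gap = spurs_pct - heat_pct
print(f"Spurs {spurs_pct:.3f}, Heat {heat_pct:.3f}, gap {gap:.3f}")  # roughly .098, or about 10 points
```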

But recently two other ways of comparing the conferences have been even more useful in forecasting the NBA Finals: who wins the All-Star Game, and how bad each conference’s worst team is. They’re surprising metrics — so surprising that I suggest you read everything to come with the appropriate amount of caution and skepticism, especially because they’re based on a small sample of just the 14 recent seasons of Western dominance. Each one represents a sliver of NBA reality and each could certainly be a fluke. But there is reason to think they might be meaningful.

The distance between each conference’s champ and its worst team appears to have a strong correlation with what happens in the finals.6 Since 2000, this factor has a correlation coefficient of 0.63 with the East champ’s record in the finals.7 That makes sense: Though each conference’s worst team doesn’t play in the playoffs, it does serve as a sort of proxy of conference strength, and of the relative ease of each finalist’s path to the playoffs.8

But for this year’s finals, that measurement doesn’t tell us much. Miami’s win percentage was .476 points better than that of the worst team in the East, the Milwaukee Bucks. The Spurs, meanwhile, were .451 points better than the Utah Jazz. So, very slight edge to Miami.
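
Spelled out per the definition in footnote 6, using the win-percentage gaps quoted above, the calculation looks like this (a minimal sketch, not the article’s actual code):

```python
# The champ-to-worst-team differential from footnote 6, using the
# win-percentage gaps quoted in the text above.
east_gap = 0.476   # Heat minus Bucks
west_gap = 0.451   # Spurs minus Jazz
differential = east_gap - west_gap
print(f"East edge: {differential:+.3f}")  # +0.025, the very slight edge to Miami
```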

The other indicator is even more laughable at first glance. Each winter, the league’s best players divide into two teams and play a defense-free exhibition known as the All-Star Game. Defense was a rarer commodity than ever before this year in New Orleans, where each team scored at least 155 points, more than any prior All-Star team had scored in regulation.

Yet recently, as the All-Star Game has gone, often, so have the finals. The East won the game in 2006 and 2008, and then the Heat and the Celtics, respectively, won the title those years. The West won the All-Star Game in the three other seasons since 2000 that the East won the league title, but each time by five points or fewer. Meanwhile, each time the West has won the All-Star Game by 10 points or more, its champ has won the NBA title while dropping two finals games or fewer. Overall since 2000, the correlation coefficient between the East’s All-Star Game scoring margin and its champ’s finals winning percentage is 0.6. For context, that same number was 0.06 from 1951, the first year with an All-Star Game, to 1999. What we’re seeing is either an aberration or a sea change.
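
One way to see how abrupt that shift is would be to split the series at 2000 and compute the correlation separately for each era. The sketch below does that with made-up placeholder seasons rather than the real data, just to show the shape of the calculation:

```python
# Era-split version of the All-Star correlation: Pearson correlation between
# the East's All-Star Game scoring margin and its champ's finals winning
# percentage, computed separately before and after 2000. Placeholder data only.
from scipy.stats import pearsonr

# (year, East's All-Star margin, East champ's finals winning percentage)
seasons = [
    (1996, 9, 0.33), (1997, -6, 0.29), (1998, 2, 0.33), (1999, -9, 0.20),
    (2006, 2, 0.67), (2008, 6, 0.67), (2011, -5, 0.33), (2013, -15, 0.43),
]

def era_r(start, end):
    """Correlation for seasons falling in [start, end]."""
    margins, finals = zip(*[(m, w) for y, m, w in seasons if start <= y <= end])
    return pearsonr(margins, finals)[0]

print(f"1951-1999: {era_r(1951, 1999):.2f}   2000-2014: {era_r(2000, 2014):.2f}")
```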

As skeptical as I am, this result may say something about the importance of teams’ very best players. Miami has reached four finals in four years since its three perennial All-Stars — LeBron James, Dwyane Wade and Chris Bosh — took their talents to South Beach. And in February in New Orleans, the trio combined for about 23 percent of the East’s points and minutes in its eight-point victory over the West.

Maybe the All-Star finding and the one about the importance of each conference’s weakest team together suggest that the Big Three had the right idea all along: The secret to success in today’s NBA is for several stars to come together on a single team that mostly gets to beat up much weaker conference opponents until the finals.

Whether the All-Star Game really matters or not, one thing is clear: These days, when it comes to forecasting the finals, that single exhibition has as much to tell us as all those regular-season interconference tests that do count in the standings.

Footnotes

  1. Back then, the NBA was known as the Basketball Association of America. The Providence Steam Rollers, one of four East teams, rarely steam-rolled Western opponents, winning three of 24 interconference games. Incidentally, the BAA’s first two postseasons were the only ones in league history in which East and West teams played each other before the final. Fortunately for this analysis, each of those two seasons ended with finals featuring an East-West matchup.

    I did omit 1950 from all analyses. That season, the NBA’s debut, was the only one in which the league as a whole was divided into three geographical regions; the Central Division produced the league champ, the Minneapolis Lakers.

  2. The correlation coefficient between the Eastern Conference’s regular-season record against the West, and its champion’s record in the final, since 2000 has been -0.35, according to my analysis. Like all analyses here, this is based on data from Basketball Reference. One small piece of good news: The correlation is slightly stronger (R=-0.39) if you include 1999, when the Spurs won the final in five games despite the East’s superior overall record. Go back any further than 1999 and the relationship breaks down and starts to reverse to the more intuitive direction: The better the conference did in the regular season, the better its champion did in the finals.

  3. Four of the last five East champs won in years that were below average for the conference, even by its low standards since the turn of the millennium. One of those four was Detroit in 2004, the conference’s recent low-water mark. Conversely, none of the four best regular seasons the East had against the West since 2000 yielded an NBA title.

  4. More precisely, a correlation coefficient of -0.06. That’s down from 0.52 from 1947 to 1979, and 0.36 from 1980 to 1999.

  5. R~0.5.

  6. More formally, it’s the gap between the East champ’s regular-season record and that of the East’s worst team, minus the equivalent gap for the West.

  7. That’s up from 0.11 from 1947 to 1999.

  8. Their paths to the finals, though, don’t seem to matter as much — since 2000, there’s been a very slight positive correlation (R=0.16) between the number of playoff games the East champ played relative to the West champ and the East’s record in the finals. That’s the opposite of the direction we’d expect if getting to play fewer playoff games before the finals, thanks to weaker competition, mattered for finals success. The correlation coefficient for 1947 to 1999 is -0.37, which is more intuitive.

Carl Bialik was FiveThirtyEight’s lead writer for news.
