Yes, The Infield Shift Works. Probably.

The infield shift is this decade’s defining baseball tactic, the most salient on-field signal that we aren’t in the aughts anymore. But, strange as it sounds, we still don’t know how well — or even whether — the shift actually works.

Our best evidence that the shift is worth doing is, well, that it’s so frequently done. Presumably, front offices wouldn’t advocate shifting — often over the objections of coaches and players — unless they thought it was wise. That’s not nothing; teams are stocked with smart people who have access to better data than we do. But it would be nice not to have to appeal to authority to make a case for the shift. (After all, until recently you could have argued against the shift by pointing out that almost nobody was using it.)

Unfortunately, corroborating evidence is hard to come by. Despite a 1,223 percent increase in the use of shifts since 2011,1 the leaguewide batting average on balls in play has increased, from .294 to .300.2 Even the leaguewide batting average on ground balls is up, from .228 to .242. Granted, the league’s batting average on pulled grounders is down from .199 to .183, but on opposite-field grounders — which can slip through holes that the shift creates — that number is way up, from .255 to .379. The net result is more hits, despite all those wacky infield rearrangements.

It could be that confounding effects — such as batters hitting the ball harder — are camouflaging the shift’s benefits, but that’s impossible to prove. Moreover, not every forward-thinking organization has fully embraced the shift. The Cubs are as stat-savvy as any organization, but in 2016 they’ve shifted less often than 28 other teams — and less often than their analytically oriented manager, Joe Maddon, who helped popularize the shift while with the Rays, did way back in 2012, before shifts were ubiquitous. Detecting the benefits of the shift wasn’t supposed to be this hard.

Shift information became publicly available for the first time this spring, but it’s not quite comprehensive enough yet to answer our questions about the shift’s effectiveness. Companies that provide shift data3 still record whether a shift was on only for balls that were put into play. That’s an important piece of the puzzle, but it’s not all we need to know. It’s likely that a shift alters both the batter’s and pitcher’s approaches in ways that might not even result in a fieldable ball. Therefore, we can’t come to any conclusion about the shift’s overall efficacy without knowing whether it affects strikeout, walk or home run rates — and we can’t figure that out without knowing where the fielders were positioned for every plate appearance, not just a subset.4

But until we can acquire all of that data, let’s try a workaround proposed by Mitchel Lichtman, co-author of the sabermetric strategy manual “The Book.” Since shifts have become much more common over the past several seasons, and some hitters are now shifted against far more frequently than others, Lichtman reasons that those oft-shifted hitters should have suffered more from the overall rise of the shift than those for whom it isn’t (and never really was) a factor. In other words, a natural experiment is happening: One subset of batters has seen an enormous uptick in the number of shifts they face, while another still essentially faces the same defenses as always. If the shift “works,” then the latter group should have fared better5 over the past handful of years than the former.

We can test that theory. First, we obtained a list of shift counts (again, only on balls in play) against all hitters from 2010 to 2015, according to Baseball Info Solutions. Then we separated those hitters into three groups, sorting them into a given tercile based on the percentage of their balls in play for which the shift was on. Finally, we created aging curves6 to illustrate the expected year-to-year change in weighted runs created plus (wRC+) for each group at each age.7 So positive numbers on the y-axis indicate an improvement in performance at a given age, and negative ones indicate a decline.8
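The delta-method aging curve described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ actual code; the record format and the weighting by the harmonic mean of plate appearances (a common convention in delta-method studies) are assumptions.

```python
# Hypothetical sketch of the "delta method" for aging curves.
# Input: one record per player-season as (player, age, wrc_plus, pa).
from collections import defaultdict

def delta_method(seasons):
    """Return {age: weighted average change in wRC+ arriving at that age}.

    For each player with back-to-back seasons at ages a and a+1, take
    the change in wRC+ between them, weighted by the harmonic mean of
    the two seasons' plate appearances, then average at each age.
    """
    by_player = defaultdict(dict)
    for player, age, wrc, pa in seasons:
        by_player[player][age] = (wrc, pa)

    deltas = defaultdict(list)  # age -> [(weight, change), ...]
    for ages in by_player.values():
        for age, (wrc, pa) in ages.items():
            nxt = ages.get(age + 1)
            if nxt is None:
                continue  # no consecutive season; contributes nothing
            wrc2, pa2 = nxt
            weight = 2 * pa * pa2 / (pa + pa2)  # harmonic mean of PA
            deltas[age + 1].append((weight, wrc2 - wrc))

    return {
        age: sum(w * d for w, d in pairs) / sum(w for w, _ in pairs)
        for age, pairs in deltas.items()
    }
```

Running this routine separately on each shift tercile yields one curve per group; positive values at an age mean the group typically improved arriving at that age, negative values mean it declined.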

[Chart: Expected year-to-year change in wRC+ by age for low-, medium- and high-shift hitters]

Perhaps unsurprisingly, hitters in the high-shift group are much better to begin with than those in the medium- and low-shift groups. Shifted hitters tend overwhelmingly to be left-handed power bats — a pretty good class of player — while the opposite group includes weak hitters who don’t do enough damage against regular defenses for opponents to even bother with a shift. However, our method accounts for this by comparing hitters to themselves over time, and the high-shift curve in the chart above is consistently lower than that of low-shift hitters, meaning the most shifted players fared worse than we’d have expected of the typical player.

The high-shift group also declined earlier — and more steeply — than medium- and low-shift hitters in the Shift Era, all of which suggests the shift has “worked.” Or at least to some extent: Over the period we examined, low-shift players gained 4.4 cumulative points of wRC+, on average, while medium-shift players gained 1.8 points and high-shift players gained only 0.5 points. Of course, we might expect the most extreme of the high-shift hitters to be subject to a more severe penalty; then again, the guy who’s seen the most shifts in baseball9 has also been its best hitter this year. Either way, in aggregate, the rise of the shift does appear to have hurt the group of hitters against whom it’s most often applied.

To make sure our method was measuring an effect of the shift — and not just a different decline profile for players in the high-shift group, who are more likely to have “old player skills” (and thus would be expected to age poorly anyway) — we ran similar aging curves for high- and low-slugging percentage hitters from the 1980s.10 The results show that today’s power hitters aren’t aging any differently relative to non-power hitters than they did during the ’80s, which suggests that the shift effect we found is real (if not exactly spectacular).

So, unlike league-wide BABIP stats, our study offers some qualified support for the shift. And until we’re able to examine fielder locations at the level of every plate appearance or even every pitch — which a wider release of raw data from MLB’s Statcast tracking system might make possible — that’s about the best we can do. As it is, we can’t dismiss the possibility that the shift’s aesthetic impact outstrips its practical implications, but it’s also way too soon to dismiss its benefits, particularly on the basis of incomplete data.11

Footnotes

  1. MLB teams shifted only 0.48 times per game that year, compared with 5.87 so far this season.
  2. According to TruMedia’s data.
  3. Which, incidentally, gives them a stake in the tactic’s success.
  4. Further complicating matters, batters face shifts more often against some teams and defenses than others, which skews simple shift/no-shift performance comparisons, even on balls in play. And while we’re at it, not all alignments that could be described as a “shift” look the same.
  5. After accounting for the effects of aging.
  6. Applying the so-called “delta method” to all plate appearances since 2010.
  7. We also limited our analysis to batters with more than 50 plate appearances in each season and only examined the part of the aging curve between the ages of 20 and 34, to mitigate concerns about sample size and survivorship bias.
  8. These curves generally follow the shape of other aging studies, although they are shifted upward.
  9. According to BIS.
  10. Bypassing baseball’s statistical Bermuda Triangle, the PED Era.
  11. In conclusion, yes: the answer is steroids, somehow. (No need to @ us.)

Rob Arthur is FiveThirtyEight’s baseball columnist and also writes about crime.

Ben Lindbergh is a former staff writer at FiveThirtyEight.
