From Tampa Bay’s Kevin Kiermaier to San Francisco’s Brandon Crawford, it was hard to tell this year’s list of Gold Glove winners, announced Tuesday night, from a list of players with the best advanced defensive metrics. That’s no coincidence: Since 2013, Rawlings, the mitt-maker that annually hands out the Gold Glove hardware, has incorporated a statistical component known as the SABR (Society for American Baseball Research) Defensive Index (SDI), giving it at least 25 percent weight in the voting. (The rest of the vote belongs to Major League Baseball managers and coaches.) But the impact of analytic tools is probably undersold by that number. Instead, the case can be made that the advanced stats have almost completely taken over the Gold Glove competition.
You can see this effect in how much more closely recent Gold Glove winners have matched the selections that would have been made using only defensive metrics: fielding runs above average, adjusted so that the average MLB player (across all positions) is worth 0 runs saved. (The precision of Baseball-Reference.com’s metric, which uses defensive runs saved for seasons since 2003 and Total Zone for years before that, has changed over time; in recent years, its components correspond very closely with those that make up SDI.)
Because MLB has expanded (offering more starting slots at a given position, and therefore the opportunity for more variance relative to average) and the quality of defensive metrics has improved (allowing metric creators to be more confident in handing out highly positive ratings), the average defensive quality of an “All-Defense” team selected purely using metrics has gradually increased since 1958. But the average quality of actual Gold Glove winners’ fielding had stayed relatively flat for over 50 years — right up until the introduction of the SDI.
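The metric-only “All-Defense” selection described above can be sketched as a simple maximization: at each position, take the player with the most fielding runs above average, then average the picks. (The player pool and run values below are illustrative placeholders, not real season data.)

```python
# Hypothetical fielding-runs-above-average values by position.
# An average MLB player is worth 0 runs saved by construction.
fielding_runs = {
    "SS": {"Crawford": 20, "PlayerB": 5, "PlayerC": -3},
    "CF": {"Kiermaier": 42, "PlayerD": 11},
    "C":  {"PlayerE": 8, "PlayerF": 2},
}

# Pick the top fielder at each position...
all_defense = {
    pos: max(players, key=players.get)
    for pos, players in fielding_runs.items()
}

# ...and compute the team's average quality in runs above average.
avg_quality = sum(
    fielding_runs[pos][name] for pos, name in all_defense.items()
) / len(all_defense)

print(all_defense)   # best fielder at each position
print(avg_quality)   # average runs saved of the metric-only team
```

As the article notes, both more starting slots (more positions in `fielding_runs`) and more confident metrics (larger positive values) push this average upward over time.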
The gap between the real Gold Glove winners and what we’ve defined as the sabermetric ideal reached an all-time high of 14 runs in 2005. That year, voters infamously gave Derek Jeter a Gold Glove for what was one of the worst defensive seasons ever at shortstop according to the numbers. Defensive metrics were improving all the time, but the voters didn’t appear to be paying attention.
The tide turned, however, with the adoption of the SDI in 2013. Immediately upon its inclusion in the voting process, the average statistical quality of a Gold Glove winner skyrocketed, from 10 runs below the sabermetric ideal in 2012 to half that a year later. Obviously, this is a bit of a circular finding: We’re judging Gold Glove winners against a statistical standard determined by one of the same metrics that goes into the SDI itself. But the leap between the pre- and post-SDI eras is still striking.
So striking, in fact, that it even goes beyond what would be expected from the direct influence SDI has on Gold Glove voting by dictating 25 percent of the vote.
“We think it’s influenced the managers’ and coaches’ voting,” Vince Gennaro, SABR’s president and a member of the SDI committee, said about SDI in a telephone interview Tuesday. On top of the SDI numbers’ algorithmic role in the voting process, Gennaro believes they have had a pronounced effect in combating incumbency bias and other reputation-based flaws in the human side of the voting. In other words, because they’re so widely available (they’re even listed on the ballots given to Gold Glove voters), the advanced metrics have also influenced the other 75 percent of the vote they don’t directly control.
“[Say] you’ve got a guy who’s not a perennial Gold Glove guy, but he really caught your eye this year,” Gennaro said. “Then you see he had 17 runs saved, versus a guy who won it last year at 7. I think it could be very much a validating thing, and it might tip you to make that vote.”
Because it essentially involves measuring players against the plays they didn’t make, defense has always been one of the toughest areas of baseball to evaluate statistically. And the absence of detailed defensive data in the past might have caused voters to err on the side of a reputation that was no longer valid (or was never deserved). But now, advanced metrics provide evidence to either support or tear down commonly held beliefs about a player’s defensive prowess, giving them a large amount of sway over both the human and computerized aspects of the Gold Glove process.
This isn’t to say that every Gold Glove now conforms to the advanced metrics. For instance, Kansas City Royals teammates Eric Hosmer and Salvador Perez won this year despite ranking sixth and seventh at their respective positions in SDI. But aside from Hosmer and Perez, every other Gold Glover ranked in the top three in SDI at his position, and 10 of the 18 winners ranked first.
| LEAGUE | POSITION | GOLD GLOVE WINNER | SDI RANK | SDI LEADER (IF DIFFERENT) |
|---|---|---|---|---|
| AL | C | Salvador Perez | 7 | Caleb Joseph |
| AL | 1B | Eric Hosmer | 6 | Mike Napoli |
| AL | 2B | Jose Altuve | 3 | Ian Kinsler |
| NL | C | Yadier Molina | 3 | Buster Posey |
| NL | 1B | Paul Goldschmidt | 2 | Brandon Belt |
| NL | 2B | Dee Gordon | 3 | Danny Espinosa |
| NL | LF | Starling Marte | 2 | Christian Yelich |
| NL | CF | A.J. Pollock | 3 | Odubel Herrera |
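The tallies quoted earlier can be checked directly from the table, which lists only the winners who did not lead their position in SDI; the remaining 10 of the 18 winners ranked first. A quick sketch:

```python
# SDI ranks of the 2015 Gold Glove winners who did NOT lead their
# position in SDI (from the table above).
non_leader_ranks = {
    ("AL", "C"): 7,   # Salvador Perez
    ("AL", "1B"): 6,  # Eric Hosmer
    ("AL", "2B"): 3,  # Jose Altuve
    ("NL", "C"): 3,   # Yadier Molina
    ("NL", "1B"): 2,  # Paul Goldschmidt
    ("NL", "2B"): 3,  # Dee Gordon
    ("NL", "LF"): 2,  # Starling Marte
    ("NL", "CF"): 3,  # A.J. Pollock
}

TOTAL_WINNERS = 18
ranked_first = TOTAL_WINNERS - len(non_leader_ranks)  # winners who led in SDI
top_three = ranked_first + sum(r <= 3 for r in non_leader_ranks.values())

print(ranked_first, top_three)  # 10 ranked first; 16 of 18 in the top three
```

Only Hosmer (6th) and Perez (7th) fall outside the top three, matching the article’s count.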
Likewise, it isn’t completely clear that a wholesale metric takeover of the Gold Gloves would be a good thing. While we can measure whether Gold Gloves are getting closer to the sabermetric ideal, it will take further research to see whether that development means having a Gold Glover in the field leads to his team playing better defense.
But since the introduction of the SDI, the Gold Glove process has undeniably become more quantitative. And that’s a pretty big shift for an award that used to be as allergic to meaningful statistics as any in the game.