The Gold Gloves kind of have a reputation in sabermetric / analytics circles. It's not a good one. There's a reason why things like the Fielding Bible Awards exist, and that's because the winners of Gold Gloves in the past haven't always tracked with the best fielders in the minds of many forward-thinking quantitative analysts.
Well, last year Rawlings announced that they would be using input from the Society for American Baseball Research (SABR) to help decide the winners of the Gold Glove awards going forward. At the time, there was quite a bit of healthy skepticism about the who, what and how of this new methodology, but then the 2013 Gold Gloves came out, and most sabermetrically-minded folks had precious little to complain about.*
* - Little to complain about is relative, especially on the internet. And we'll get to this later. Promise.
At any rate, SABR and Rawlings invented not a true metric, but an index: the SABR Defensive Index (SDI). SDI was developed by the SABR Defensive Committee (good, smart folks), and takes into account most major defensive metrics and, well, you should probably just read a lot about it here. More importantly, this page also shows us how qualified players scored on the SDI. This is good. Transparency is very, very good. Take a minute and see where the actual winners and the SDI leaders intersect:
Gold Glove Winner | SDI Ranking | SDI |
---|---|---|
R.A. Dickey | 1 out of 39 | 4.9 |
Salvador Perez | 1 out of 15 | 7.6 |
Eric Hosmer | 3 out of 12 | 6.1 |
Dustin Pedroia | 1 out of 10 | 11.6 |
Manny Machado | 1 out of 11 | 32.4 |
J.J. Hardy | 6 out of 11 | 1.6 |
Alex Gordon | 1 out of 8 | 10.5 |
Adam Jones | 13 out of 13 | -9.8 |
Shane Victorino | 1 out of 11 | 19.0 |
Adam Wainwright | 9 out of 44 | 2.4 |
Yadier Molina | 2 out of 14 | 9.4 |
Paul Goldschmidt | 2 out of 10 | 10.7 |
Brandon Phillips | 1 out of 11 | 8.1 |
Nolan Arenado | 1 out of 13 | 21.5 |
Andrelton Simmons | 1 out of 11 | 29.3 |
Carlos Gonzalez | 4 out of 8 | 1.7 |
Carlos Gomez | 1 out of 11 | 23.1 |
Gerardo Parra | 1 out of 14 | 18.9 |
Crazy, right? 11 of the 18 award winners were first place in the SDI rankings. Two more were second-ranked players, and there's also a third and a fourth. By and large, the SDI numbers -- which are agreed to be better than nothing, I think -- aligned with the folks who actually won. In fact, the SDI accounted for 30 of the votes for each award, around 25% of the decision.
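For concreteness, here's a quick Python sketch (mine, not SABR's or Rawlings') that just re-counts the table above and runs the arithmetic on the vote share. The 30 votes and the ~25% share come straight from the description of the process; the implied total vote pool is simple division, not an official figure.

```python
# Re-count the Gold Glove winners' SDI ranks from the table above.
from collections import Counter

# (winner, SDI rank among qualifiers at the position, SDI value)
winners = [
    ("R.A. Dickey", 1, 4.9), ("Salvador Perez", 1, 7.6), ("Eric Hosmer", 3, 6.1),
    ("Dustin Pedroia", 1, 11.6), ("Manny Machado", 1, 32.4), ("J.J. Hardy", 6, 1.6),
    ("Alex Gordon", 1, 10.5), ("Adam Jones", 13, -9.8), ("Shane Victorino", 1, 19.0),
    ("Adam Wainwright", 9, 2.4), ("Yadier Molina", 2, 9.4), ("Paul Goldschmidt", 2, 10.7),
    ("Brandon Phillips", 1, 8.1), ("Nolan Arenado", 1, 21.5), ("Andrelton Simmons", 1, 29.3),
    ("Carlos Gonzalez", 4, 1.7), ("Carlos Gomez", 1, 23.1), ("Gerardo Parra", 1, 18.9),
]

ranks = Counter(rank for _, rank, _ in winners)
print(f"Ranked #1 in SDI: {ranks[1]} of {len(winners)}")  # 11 of 18
print(f"Ranked #2 in SDI: {ranks[2]}")                    # 2

# If 30 SDI "votes" are roughly 25% of each decision, the full pool works
# out to about 120 votes per award -- arithmetic, not an official number.
sdi_votes, sdi_share = 30, 0.25
print(f"Implied total votes per award: ~{sdi_votes / sdi_share:.0f}")
```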
I think this is progress.
But I have to say that I think it is progress, not that I'm sure it's progress. Defensive statistics, advanced or otherwise, are still very much a work in progress. UZR and DRS are great at telling us a little about certain parts of defense -- specifically the range of different players -- but they vary greatly from year to year, are subject to something called "stringer bias," are necessarily built on small samples, and don't handle catchers particularly well. In short, there are a lot of factors that make them a little bit less than an exact science.
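If you want a feel for that year-to-year flakiness, here's a bare-bones sketch of the kind of check analysts run: correlate a fielder's UZR in one season with his UZR the next. The file name and column names are hypothetical stand-ins for a FanGraphs-style export, not a real API.

```python
# Rough year-to-year reliability check for UZR (hypothetical data file).
import pandas as pd

# Expected columns: playerid, Season, UZR -- one row per player-season.
df = pd.read_csv("uzr_by_season.csv").sort_values(["playerid", "Season"])

# Pair each player-season with that player's following season.
df["UZR_next"] = df.groupby("playerid")["UZR"].shift(-1)
df["Season_next"] = df.groupby("playerid")["Season"].shift(-1)
pairs = df[df["Season_next"] == df["Season"] + 1]  # consecutive seasons only

# A low correlation here is what "varies greatly from year to year" looks like.
print(f"Year-to-year UZR correlation: {pairs['UZR'].corr(pairs['UZR_next']):.2f}")
```

The point isn't the exact number you'd get; it's that defensive metrics tend to bounce around from season to season far more than most offensive ones do.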
Even still, having something that's close to an objective measure, an outside measure, as part of the decision-making process is probably a good thing. Not only are the defensive stats that make up the SDI much, much better than nothing, they provide a stark difference in perspective from those who typically vote on Gold Gloves: the managers and coaches in each league. Not that managers and coaches aren't experts on fielding, but they certainly may not be experts on all the players in each league. Reputation is king, not talent.
Judging fielding ability is super, super hard. So when it comes to evaluating defense, I've long been a proponent of the wisdom of educated crowds. Since it's so hard to determine a player's range / skill / true talent level from watching him play a few -- or even a whole host of -- games, the eye test isn't always accurate. And that's particularly true if you only watch one player for those games ... because then you've got no baseline.
When we focus on numbers like UZR / DRS / FRAA, we're not getting the scouting perspective that can tell us whether or not the numbers we see reflect a player's true skill level. The numbers can be thrown off by shifts, positioning, and the like, and they aren't always reliable in small samples -- or even larger ones. They're subject to large swings.*
* - See: Bourn, Michael and Bruce, Jay.
Even still, SDI is a positive change. It's flawed, like nearly every metric, but it's adding another educated, intelligent perspective to the mix. The committee might not be perfect for everyone (I'm glad that there are guys like Chris Dial, John Dewan, and Michael Humphreys involved ... but I'd really like to see another catcher defense guy like Ben Lindbergh or Max Marchi or Bojan Koprivica on board), but it's a good one.
If you look at how SDI matches up with the winners, it's actually pretty great that this smart committee, using some of the best available defensive metrics, chose very similar people. It'd be wonderful to actually see HOW the votes were parceled out, who voted for what, etc. Did the 30 votes for SDI candidates all go to the top scorer at each position? Did it markedly affect the voting because the non-SDI votes were split evenly, making the SDI votes the deciding ones? Or are the non-SDI voters all just really dialed-in to the analytics movement? More transparency, please.
At the same time ... Adam Jones.* Mr. Jones was the absolute worst qualifying center fielder by the SDI rankings. Rawlings gave the SDI leaderboards to the Gold Glove voters, according to what has been said about the process. Jones sat at the very bottom. And yet, the other voters still gave him enough votes to win at one of the most important defensive positions in the American League. So maybe no one else voting really cares about SDI after all?
* - Note: You know what sucks? That we have to kind of hold up Adam Jones as a negative example, both here and in discussions about the Silver Slugger Award, another award I don't think he was the best candidate for. Jones, by all accounts, is a good person and a very, very talented baseball player. It's a shame we have to use him as an example of a guy who doesn't "deserve" his accolades.
I guess we don't know yet. It's awfully hard to tell whether SDI is making a sea change in how an "important" MLB award is decided ... or whether we had one year where the perceptions of players among coaching staffs aligned with the perceptions of some of the people who develop defensive metrics.
After all, one year is kind of a small sample size when it comes to defense.
. . .
All statistics courtesy of the Society for American Baseball Research and FanGraphs.
Bryan Grosnick is the Managing Editor of Beyond the Box Score. You can follow him on Twitter at @bgrosnick.