Defensive metrics have been evolving for years. One of their major uses is to see how many runs a player saves compared to other players in a season. That value is added to the player's offensive contributions to measure his total contribution to his team.
A very familiar measure of this overall contribution is Wins Above Replacement (WAR), which is tracked by FanGraphs. The defensive metric that FanGraphs uses in its WAR calculation is UZR (Ultimate Zone Rating). The measure is considered one of, if not the, best defensive measures available today.
Note: My critique focuses on UZR, but almost every other defensive metric available today also adjusts to a yearly average. I will use UZR for my examples (mainly because I have done quite a bit of work with it and can reference that work), but the writeup applies to any other defensive metric that adjusts to a yearly value.
The problem with UZR (and almost all other defensive metrics) is that it is adjusted each year to an average value. I have seen people, including myself (here and here and here and here), use year-to-year UZR values to predict or demonstrate a player's defensive ability. Here are two examples (one theoretical and one real-world) of how a player's UZR can be affected.
Example 1: Say all the shortstops who played in 2007 returned in 2008, except that the top four defensively rated shortstops got hurt and missed the year, and all four were replaced by below-average defensive shortstops. A shortstop who was league average defensively in 2007 would now rate above league average in 2008, even if he and every other returning shortstop played exactly the same as in 2007, simply because of the four who missed the entire season.
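The example above can be sketched in a few lines of Python. All of the player names and runs-saved numbers here are invented for illustration; the only point is the arithmetic of zeroing a stat out against the yearly average.

```python
# Hypothetical illustration of how re-centering to the yearly average
# shifts an unchanged player's rating (all numbers are made up).

def center_to_average(raw_runs):
    """Express each player's raw runs saved relative to the group mean,
    mimicking how an average-adjusted metric is zeroed out each year."""
    mean = sum(raw_runs.values()) / len(raw_runs)
    return {name: runs - mean for name, runs in raw_runs.items()}

# 2007: raw runs saved for a small pool of shortstops.
ss_2007 = {"Elite1": 15, "Elite2": 12, "Elite3": 11, "Elite4": 10,
           "OurGuy": 5, "Weak1": 0, "Weak2": -2, "Weak3": -3}

# 2008: the four elite defenders are hurt and replaced by below-average
# gloves; every returning shortstop plays exactly the same as in 2007.
ss_2008 = {"Sub1": -3, "Sub2": -5, "Sub3": -6, "Sub4": -10,
           "OurGuy": 5, "Weak1": 0, "Weak2": -2, "Weak3": -3}

print(center_to_average(ss_2007)["OurGuy"])  # about average in 2007
print(center_to_average(ss_2008)["OurGuy"])  # same play, now well above average
```

With identical raw play, "OurGuy" goes from roughly average (-1.0) to well above average (+8.0) purely because the pool around him got worse.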
Example 2: In 2005, the formulas used to calculate UZR loved the effort Orlando Cabrera displayed, especially in the Range Factor component of UZR. The following chart shows Orlando's Range Value and total UZR for 2004 to 2006.
As you can see, Mr. Cabrera's UZR value jumped by 22 points in 2005, and in 2006 it fell back to 2004 levels. Want to guess what the other shortstops' UZR values did in 2005? Take a look at a chart of the drop-off.
As can be seen, in 2006, when Cabrera's numbers came back down, everyone else's UZR values adjusted back up to roughly where they had been before.
Besides the two previous examples of a player getting hurt or a player playing significantly better, here is a list of other possible causes:
Small sample size - Unlike offensive stats, defensive stats get broken down into 7 to 9 additional categories (depending on whether the defensive metric measures catchers and pitchers). These small samples lend themselves to quite a bit of variability.
A player can change positions - This year, Coco Crisp came to the Royals and played center field, which moved David DeJesus over to left field. David was an average to above-average center fielder (lifetime value of 4.6 UZR/150 games), but he is a great left fielder (lifetime value of 18.4 UZR/150 games). I would not be surprised if the UZR values of other left fielders are down this year, and much of that can be blamed on DeJesus.
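The small-sample point above can be demonstrated with a quick simulation. The conversion rate and chance counts below are assumptions chosen only to be plausible, not actual UZR inputs: the idea is that with a few hundred chances per season, a fielder's measured rate wobbles from year to year even when his true skill never changes.

```python
# A toy simulation of small-sample noise in a defensive rate stat.
# TRUE_CONVERSION and CHANCES_PER_YEAR are invented illustrative values.
import random

random.seed(42)  # fixed seed so the run is repeatable

TRUE_CONVERSION = 0.80   # assumed "true" out-conversion rate on balls in zone
CHANCES_PER_YEAR = 300   # rough number of chances at one position in a season

def observed_rate(n_chances):
    """Simulate one season: each chance converts to an out with the
    same fixed probability, then return the observed rate."""
    outs = sum(random.random() < TRUE_CONVERSION for _ in range(n_chances))
    return outs / n_chances

seasons = [observed_rate(CHANCES_PER_YEAR) for _ in range(5)]
print([round(r, 3) for r in seasons])           # same skill, five "seasons"
print(round(max(seasons) - min(seasons), 3))    # spread from chance alone
```

Even with an unchanging true skill, the five simulated seasons produce visibly different rates, which is exactly the kind of movement that can be mistaken for a real defensive change.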
So what are some possible ways to get comparable year-to-year UZR values?
If you are looking at one player, look at 5-6 other players at that position and see if they show the same decrease or increase as the player you are looking at.
Compare the player's UZR numbers to Tom Tango's Defensive Scouting Report.
Find a metric that doesn't adjust to yearly values, such as Range Factor. Though Range Factor is not a good method, it might allow you to see whether the player was consistent in the year in question. I have asked around, and all of the complex defensive metrics adjust to a yearly average. If any reader knows of a metric that does not adjust to a yearly value, please share.
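The first remedy above can be turned into a simple check: before trusting a player's year-to-year UZR swing, measure how his positional peers moved over the same span and net that league shift out. All of the UZR values below are invented for illustration.

```python
# A rough sketch of the peer-comparison check (all values are hypothetical).

def league_shift(peers, year_a, year_b):
    """Average UZR change among peer players between two seasons."""
    deltas = [uzr[year_b] - uzr[year_a] for uzr in peers.values()]
    return sum(deltas) / len(deltas)

# Made-up shortstop UZR values for two seasons.
peers = {
    "SS_A": {2004: 3.0, 2005: -4.0},
    "SS_B": {2004: 1.0, 2005: -6.0},
    "SS_C": {2004: -2.0, 2005: -9.0},
    "SS_D": {2004: 5.0, 2005: -1.0},
    "SS_E": {2004: 0.0, 2005: -7.0},
}
player = {2004: 2.0, 2005: -5.0}

baseline = league_shift(peers, 2004, 2005)
adjusted_change = (player[2005] - player[2004]) - baseline
print(baseline)         # the peer group as a whole dropped by about 7 runs
print(adjusted_change)  # our player's change, net of the group shift
```

In this fabricated case the player's raw 7-run drop nearly vanishes once the group-wide shift is removed, suggesting the movement was the yearly re-centering, not the player.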
I hope this writeup increases understanding of the limitations of using UZR (and other defensive metrics) when comparing year-to-year values. As always, your comments and suggestions are welcome.