
Should We Use Home Runs to Evaluate Pitchers?

Home runs are as unstable as Delmon Young. Does that mean we should discount them in our DIPS work?

Should Santana get blamed for each of the 39 home runs he gave up last year? (Photo credit: Leon Halip)

Defense-independent pitching statistics (DIPS) theory is famous for isolating the three aspects of a pitcher's performance over which he has the most control: strikeouts, walks, and home runs. Criticisms of DIPS theory, and by extension of FIP, most commonly claim that more aspects of a pitcher's performance - that is, non-HR batted balls and the sequencing of outs - should be measured when evaluating pitchers. However, over on Lookout Landing yesterday, Matthew argued that FIP should in fact measure less than it currently does, the "less" referring to home runs.
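For readers who want the arithmetic behind the debate, here is a minimal sketch of the standard FIP formula. The constant shown is illustrative; in practice it is recalculated each season so that league-average FIP matches league-average ERA.

```python
def fip(hr, bb, hbp, k, ip, constant=3.10):
    """FIP = (13*HR + 3*(BB + HBP) - 2*K) / IP + constant.

    Home runs carry by far the largest weight, which is why the
    instability of HR rates matters so much for the metric.
    """
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

# Purely illustrative pitcher line (not any real player's stats):
# 20 HR, 50 BB, 5 HBP, 200 K over 200 innings.
print(round(fip(20, 50, 5, 200, 200.0), 2))
```

Note how swapping even a handful of home runs in or out moves the result more than a comparable change in walks or strikeouts, thanks to the 13x weight on HR.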

Of course, Matthew does not propose that FIP only measure strikeouts and walks, for such an idea would be absolutely absurd! Absurd, I tell you! No, instead he suggests that FIP (and other ERA estimators like xFIP and tRA) should replace actual home runs with expected home runs, based on both batted-ball type and the direction of the batted ball. I'll let him explain:

A pitcher who saw 10% of his flyballs go for home runs in one sample might reasonably have given up anywhere from 0% to 20% in the other sample. There's no telling here. The measurement isn't stable, even after accounting for the park. In fact, it isn't stable no matter how high a cutoff we use within a single season.

To me, that suggests that we should want to at least heavily discount or ideally eliminate the use of home runs when it comes to evaluating a pitcher's ability...

One way of including the home run component without relying on the unstable reality of actual home runs is to ignore actual home runs and instead craft an expected number of home runs allowed from some combination of other, hopefully more stable, results...

In fact, if you split the field of play in half, with a pull side (to the batter's perspective) and an opposite field side, the league's home run per flyball rate is 25% on the pull side and only 3% on the opposite field side. That's a big difference and led me to wondering if flyballs hit the other way should be contributing equally to a pitcher's expected number of home runs allowed. Perhaps a better estimator of home runs could be built by utilizing the direction of the batted balls.
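Matthew's proposal can be sketched directly from the league rates he quotes: credit each fly ball with the direction-specific league HR/FB rate (25% pull side, 3% opposite field) rather than counting the home runs that actually happened. The function name and the even pull/oppo split in the example are my own illustration, not part of his piece.

```python
# League HR/FB rates by direction, as quoted in the excerpt above.
PULL_HR_PER_FB = 0.25
OPPO_HR_PER_FB = 0.03

def expected_hr(pull_fb, oppo_fb):
    """Expected home runs from fly-ball counts split by direction."""
    return pull_fb * PULL_HR_PER_FB + oppo_fb * OPPO_HR_PER_FB

# A pitcher with 100 fly balls split evenly by direction would be
# "expected" to allow 14 HR, regardless of how many actually left the park.
print(expected_hr(50, 50))
```

The appeal is that fly-ball counts and directions stabilize faster than actual home runs; the cost, as argued below, is that the pitcher is no longer charged for the specific home runs he allowed.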

My main issue with this argument is that FIP is not designed to be the most stable metric, nor does stability necessarily go hand in hand with evaluative power. What I mean is this: home runs may be very unstable, which makes metrics like FIP also unstable, but that doesn't mean we shouldn't punish the pitcher for them. It's important not to mistake instability for lack of control; if a pitcher allows a home run, either he did something that he did not mean to do, or his stuff wasn't good enough to beat the hitter. Either way, we should not lessen our blame because the pitcher's home run rate isn't stable.

Nevertheless, the entire piece is fascinating, well-written, and well-argued, so I'd encourage you to go read the whole thing. Whether or not the ideas presented change the way we evaluate pitchers, they are surely useful for projecting future performance.

Topics for Discussion:

  • Should FIP continue to include raw home runs, despite their instability?
  • Is Matthew's idea of using the direction of fly balls to estimate home runs a good one? Are there any flaws in his argument?