
Does "Better" Pitching Cause the Batting Average on Balls in Play to Fall? (2nd update)

This week, I analyze more pitchers on this question. My initial finding was that the yearly variation in the batting average pitchers allow on balls in play (BABIP) is partly explained by their defense-independent stats, or DIPS. Last week I found no relationship, using an idea from one of the commenters on the original post. But this time I get a result similar to what I found the first time.

In my first look, here is what I did:

"What I do here is run a regression in which a pitcher's yearly BABIP is a funtion of his DIPS stats, all per batter faced (BFP). The list of pitchers includes the 14 pitchers who had 20+ seasons with 100+ IP. This is so each pitcher has a high number of observations of seasons with a fairly high number of IP. The idea was to see if a guy pitched better in terms of the frequency of strikeouts, walks and HRs allowed, it might be harder for batters to make solid contact and they would pop out more or ground out weakly more often. So I expected the sign on the coefficients for HRs and BBs to be positive and negative on strikeouts."

I found that, in some cases, the DIPS explained some of BABIP.

Here is what the commenter, Guy M, suggested:

"This is an interesting way to look at BABIP's relationship to other skills. Obviously, limiting the study to 20 of the greatest pitchers in history is a real limitation, but still interesting. One way Cyril could vastly expand the sample would be to normalize each variable for a pitcher, compared to his career rate. So in a given year a pitcher might be, say, 107 K+, 98 BB+, 113 HR+, and BABIP 102+ (all compared to his career rate). Then you could include hundreds or thousands of pitcher seasons in your analysis, including a range of abilities, and see what the relationship looks like."

Using this approach, I found no relationship.

So I looked at more pitchers using my original approach. I found all the pitchers who had 16+ seasons with 100+ IP since 1920 (eliminating the deadball-era guys, when a greater percentage of HRs were inside the park). The table below shows the results for the new group of pitchers. Again, for many of them, some of their BABIP is explained by their DIPS (indicated by the r-squared). But just like in my first look, there is no consistency in the signs on the coefficients. I have no idea why a good portion of Ruffing's variation in BABIP is explained by his variation in DIPS while the coefficient on HR% is negative. My thought was that as HR% went up, BABIP would go up, since pitchers are getting hit harder. For some pitchers, the more batters they struck out, the higher their BABIP. I expected a negative relationship, based on the idea that if a pitcher is striking out more batters, the batters are making solid contact less often, meaning they will pop up or ground out weakly more often (thus lowering BABIP).

The failure of the signs to consistently follow the pattern I expected might just mean that pitchers are idiosyncratic: what causes BABIP to fall for one pitcher might cause it to rise for another. But, for whatever reason, BABIP seems to be somewhat affected by pitcher-controlled stats, even if the direction of the effect varies by pitcher. And as I mentioned in the first article, given that the fielders behind each pitcher change from year to year, it becomes harder to find that DIPS affects BABIP. Why the study Guy M suggested (which seems reasonable to me) shows no relationship while my angle does, I don't know.

The table below shows the regression results for the pitchers.