Last week, I delved back into one of my favorite baseball research topics: the wacky world of single-season BABIP (batting average on balls in play).
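As a quick refresher, BABIP strips out home runs and strikeouts and asks how often a ball put in play falls for a hit. Here's a minimal sketch of the standard formula (the sample numbers below are made up purely for illustration):

```python
def babip(h, hr, ab, k, sf):
    """BABIP = (H - HR) / (AB - K - HR + SF): non-homer hits divided
    by at-bats that ended with the ball in play (plus sac flies)."""
    return (h - hr) / (ab - k - hr + sf)

# Made-up line for illustration: 180 hits, 15 HR, 700 AB, 200 K, 5 SF
print(f"{babip(180, 15, 700, 200, 5):.3f}")  # -> 0.337
```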
In that article, I discussed how randomness (luck) has the greatest effect on BABIP in any single season. I also noted that both defense and pitching skill have a substantially smaller, but real, effect on the number.
I tested both strikeout percentage (K%) and Ultimate Zone Rating (UZR/150) for pitchers who threw at least 20 innings from 2003-12, in an attempt to find out how much each factor explained the variation in BABIP.
I came up with these two correlations (r):
| Factor | r |
|---|---|
| K% | 0.20 |
| UZR/150 | 0.13 |
Neither correlation was very strong, but both relationships were statistically significant, which was interesting and consistent with prior research on the subject.
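For anyone who wants to run this kind of test themselves, the correlation step looks something like the sketch below. The file and column names (k_pct, uzr_150, babip) are assumptions for illustration; the actual data would come from FanGraphs exports:

```python
import pandas as pd

# One row per pitcher-season, 2003-12, minimum 20 IP.
# File and column names are hypothetical.
df = pd.read_csv("pitcher_seasons.csv")

r_k = df["k_pct"].corr(df["babip"])      # Pearson r: K% vs. BABIP
r_uzr = df["uzr_150"].corr(df["babip"])  # Pearson r: UZR/150 vs. BABIP
print(f"K%: r = {r_k:.2f} | UZR/150: r = {r_uzr:.2f}")
```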
The study was not perfect, and readers made several suggestions for improving it, as well as for further research into the issue.
The first suggestion came from Tom Tango of The Book Blog. In his response to my piece, Tango made this comment:
> Glenn's study is subject to his sample, and unfortunately, his sample included everyone with at least 20 IP (ostensibly 60 BIP). So, that more than anything is going to drive the results. Remember, the larger the number of trials, the higher the r. That's because the larger the number of trials, the less random variation has an impact, and the more signal can rise above the noise.
>
> I'd like to see the study redone with pitchers with at least 300 balls in play.
I made a few changes to my sample to oblige Tango's request.
First, the goal of this piece was to look primarily at the relationship between strikeouts and BABIP, so I tested only strikeout percentage and left UZR/150 for another day. Second, I limited the sample to 2005-12, because, as Tango noted, "the larger the number of trials, the higher the r."
I ran a linear regression for pitchers with at least 300 BIP who did not change teams, over the years 2005-12 (n = 1,034), with this result:
As should probably be expected, the smaller sample resulted in a reduction in r (from .20 to .10). The reduction was larger than I personally expected, but maybe I am overestimating the suppressive effect that strikeouts can have on BABIP.
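For reference, the filtering plus regression looks something like the sketch below. Again, the column names (season, bip, changed_teams, k_pct, babip) are assumptions, not the actual dataset:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("pitcher_seasons.csv")  # hypothetical file, as above

# 2005-12, at least 300 balls in play, single-team seasons only.
# changed_teams is assumed to be a boolean column.
sample = df[df["season"].between(2005, 2012)
            & (df["bip"] >= 300)
            & ~df["changed_teams"]]

fit = stats.linregress(sample["k_pct"], sample["babip"])
print(f"n = {len(sample)} | r = {fit.rvalue:.2f} | slope = {fit.slope:.4f}")
```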
Not all BABIPs are created equal. Lumping different pitchers' BABIPs from different seasons, leagues, teams, and parks into one large sample struck me as rather silly.
Wouldn't it make more sense to run this test against the team each pitcher played for in a given season?
What I mean is that I would expect a pitcher with a high K% to post a lower BABIP than his team's BABIP (excluding that pitcher), and a pitcher with a low K% to post a higher BABIP than his team's. For example:
In 2012, Justin Verlander had a very high strikeout percentage (25 percent), and his BABIP against (.273) was 34 points lower than the Tigers' team BABIP (.307), which included Verlander! When Verlander is excluded, the team's BABIP rises to .313, 40 points higher than his.
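The exclusion step is just subtraction: remove the pitcher's hits on balls in play and his balls in play from the team totals, then recompute. A sketch with made-up component totals (chosen to land near the numbers above, not the Tigers' actual 2012 components):

```python
# Made-up component totals for illustration only.
team_hits_ip, team_bip = 1450, 4720    # team hits on balls in play / BIP
pitch_hits_ip, pitch_bip = 180, 660    # one pitcher's own components

pitcher_babip = pitch_hits_ip / pitch_bip                            # ~.273
team_babip = team_hits_ip / team_bip                                 # ~.307
team_excl = (team_hits_ip - pitch_hits_ip) / (team_bip - pitch_bip)  # ~.313

# Removing a low-BABIP pitcher pushes the rest of the team's mark up.
```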
So, I used the same sample from earlier in the article to see if this relationship between K% and the difference between individual and team BABIP was, in fact, real:
Much to my surprise, strikeout percentage did not explain this difference very well. The correlation between the two numbers was just .05, which is much lower than I would have expected. However, there is one underlying factor that could be confounding things here.
An individual pitcher has zero effect on his team when he is not on the mound.
Using just a pitcher's ability to strike batters out to explain the difference between his BABIP and the BABIP against his team when he is not on the mound may be an extremely foolish venture.
Why should there be a correlation between those things?
This result brought me to a suggestion from Beyond the Box Score's very own James Gentile. In the comments of my original piece, James asked what the simple correlation was between a pitcher's BABIP and his team's BABIP (excluding that individual pitcher).
I expected the result to be fairly high, partly because I've heard this argument for years:
> Zack Greinke's BABIP was higher than average in most years because the defense playing behind him was not very good. His team as a whole had a high BABIP; thus, we should expect Zack's to be high as well.
So, I tested James' idea using the same sample, which yielded this result:
The correlation between an individual's BABIP and his team's BABIP (excluding him) was only r = .16. That result is still significant, but not nearly as strong as one might expect.
The largest finding that came out of these results was not even the correlation, though.
I found that, on average, a pitcher's individual BABIP was .019 away from his team's. That number might sound small, but 19 points is a lot when you're discussing something like BABIP.
In other words, from 2005-12, a pitcher who allowed at least 300 balls in play had a BABIP that sat, on average, 19 points above or below his team's.
For example, if a team's BABIP excluding one starter was .300, it would be just as reasonable to expect that starter's BABIP to be .319 as to expect it to be .281, which is a pretty large difference.
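That .019 figure is just the mean absolute difference, which is a one-liner once each pitcher's row carries his team's BABIP-without-him. A sketch, again with hypothetical file and column names:

```python
import pandas as pd

sample = pd.read_csv("pitcher_seasons_300bip.csv")  # hypothetical file

# Absolute gap between each pitcher's BABIP and his team's BABIP
# computed without him; column names are assumptions.
gap = (sample["babip"] - sample["team_babip_excl"]).abs()
print(f"mean absolute gap: {gap.mean():.3f}")  # the article finds .019
```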
So what's the point?
The goal of this piece is not to push people away from the idea that pitchers like Greinke, who play in front of really good (or bad) defenses, should have BABIPs that move with those defenses.
Instead, my goal is to reiterate a point that has been made time and time again.
It's impossible to explain one season of BABIP. The number is affected by far too many factors, and the main one, of course, is random variation: LUCK!
How do you explain luck?
You can't. And maybe we should stop trying to explain the unexplainable.
But for me, it's at least fun to try.
All statistics come courtesy of FanGraphs.
You can follow Glenn on Twitter @Glenn_DuPaul.