How Much Clutch Pitching is Really Going On?

A couple of weeks ago a commenter criticized some research of mine on clutch pitching. They suggested that if you look only at pitchers who rack up a lot of innings and then run regressions with their ERAs depending on their OBP and SLG allowed, it will be hard to find many of them being clutch. If high-IP pitchers are generally better in the clutch than other pitchers (one reason they might be allowed to pitch more innings), looking only at them leaves out all the "chokers." So if you regress ERA against OBP and SLG, you are not likely to find much evidence of clutch pitching. I tried a couple of things to account for this selection bias, and I still don't find much clutch pitching going on.

If you run a regression with ERA as the dependent variable and OPS (OBP + SLG) allowed as the independent variable, the regression finds the best possible relationship between them in the form of an equation. Since you are trying to find the "best fit," some pitchers' ERAs will be under-predicted and some over-predicted (in a good regression those two groups should be close in total number). A good clutch pitcher will have a lower ERA than expected, and vice versa. If there are about the same number of pitchers in each group, it might seem that the chokers cancel out the clutch pitchers. That tends to lead to the conclusion that where a guy landed (over- or under-predicted) was just luck, and there is not much clutch pitching going on.
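To see why the two groups balance out, here is a minimal sketch of that kind of regression. The OPS and ERA numbers are made up for illustration (the studies below use real data); the point is that a least-squares fit with an intercept forces the residuals to sum to zero, so over- and under-predicted pitchers cancel by construction.

```python
# Toy illustration: fit ERA = slope*OPS + intercept by least squares and
# look at the residuals. The OPS/ERA pairs are hypothetical.
ops = [0.650, 0.680, 0.700, 0.720, 0.750, 0.780]
era = [3.10, 3.45, 3.60, 3.95, 4.30, 4.70]

n = len(ops)
mean_x = sum(ops) / n
mean_y = sum(era) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ops, era)) / \
        sum((x - mean_x) ** 2 for x in ops)
intercept = mean_y - slope * mean_x

# Residual = actual ERA minus predicted ERA. A negative residual means a
# lower-than-expected ERA ("clutch"); positive means worse ("choker").
residuals = [y - (slope * x + intercept) for x, y in zip(ops, era)]

# With an intercept in the model, the residuals sum to (essentially) zero:
# the over-predicted and under-predicted pitchers balance out.
print(round(abs(sum(residuals)), 10))  # 0.0
```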

Also, if the regression fits well, very few pitchers' predicted ERAs will be far from their actual ERAs. So if the actual ERA is close to the one predicted by OPS (or even by OBP and SLG as separate variables), a pitcher seems to do about as well in the clutch as he does otherwise.

But if you look only at guys who really are good in the clutch, the regression cannot find a relationship between ERA and OPS (or OBP and SLG) that reflects the true run value of OPS, since the "bad" clutch pitchers are left out. To get around this problem, I used an equation for ERA based on team-level data. My initial study, Do Pitchers Give Up their Expected Number of Runs Based on OPS?, looked at pitchers who had at least 1000 IP from 1991-2000 (we have to assume that they tended to be good pitchers who fit the problem I have described). I calculated a predicted ERA for each of them, not from a regression that used those pitchers as the data, but from team-level data (ERA, OBP, and SLG allowed) for the same period. The regression equation was

(1) ERA = 13.48*OBP + 11.13*SLG - 4.73

That is actually pretty close to the equation found when using the individual pitchers in the regression which was

(2) ERA = 11.93*OBP + 10.21*SLG - 3.91
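A quick numeric check shows how close the two equations are. The .330 OBP / .420 SLG line below is a made-up, roughly average batting line, not from the study:

```python
# Equation (1), fit on team-level data, and equation (2), fit on the
# individual pitchers, both from the article.
def era_team(obp, slg):
    return 13.48 * obp + 11.13 * slg - 4.73

def era_indiv(obp, slg):
    return 11.93 * obp + 10.21 * slg - 3.91

# For a hypothetical .330 OBP / .420 SLG allowed, the predictions nearly agree:
print(round(era_team(0.330, 0.420), 2))   # 4.39
print(round(era_indiv(0.330, 0.420), 2))  # 4.32
```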

Then I predicted the ERA for each pitcher in the study using equation (1), calculated how much that differed from his actual ERA, and averaged the differences for the whole group. The average was about -0.13, meaning their ERAs were generally lower than expected. You could interpret this as evidence of clutch pitching. But over a full season of 225 IP, that is only about 3.28 runs saved. So if it is clutch pitching, it is not having a strong effect. Out of 59 pitchers, only 10 saved more than 0.25 runs per 9 IP. The best clutch pitcher was Mike Hampton, who saved .45 runs per game, or about 11 per season (he was the only one who saved more than 10 per season). Since it usually takes about 10 additional runs to win one more game over the course of a season, it does not look like there is much clutch pitching going on. I also did not make the corrections for handedness and strikeouts that I made in the earlier study (those brought Hampton's runs saved down by about 0.10 per game). And, again, part of the deviation from expected ERA could be luck, which would mean even less clutch pitching is going on.
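The conversion from an ERA differential to season runs works out like this. The .310 OBP / .390 SLG line is a made-up example, not a pitcher from the study; with the rounded -0.13 differential the season figure comes to 3.25 runs (the article's 3.28 presumably reflects an unrounded differential):

```python
def predicted_era(obp, slg):
    """Equation (1) from the article (team-level data, 1991-2000)."""
    return 13.48 * obp + 11.13 * slg - 4.73

# Hypothetical line of .310 OBP / .390 SLG allowed (made-up values):
print(round(predicted_era(0.310, 0.390), 2))  # 3.79

# A full season of 225 IP is 25 nine-inning games, so an ERA differential
# of -0.13 converts to roughly 3.25 runs saved over a season.
diff = -0.13
print(round(abs(diff) * 225 / 9, 2))  # 3.25
```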

I tried one other way to adjust for this bias problem. I took all pitchers who had at least 100 IP in their career from 1996-2005 and started 80% or more of the games they pitched in. Then I ran a regression in which runs per 9 IP (R/9) was the dependent variable and HRs, non-HR hits and BBs per 9 IP were the independent variables. The equation was

(3) R/9 = 1.38*HR + 0.546*NONHR + 0.319*BB - 2.29

Then I predicted each pitcher's R/9 and found the difference between that and his actual R/9. I had 165 pitchers. The average difference for the whole group was -.002. But did the pitchers who pitched the most innings have a different result than those who pitched the fewest? Yes, but the difference seems small. The top 30 in IP had an average differential of -.069; the bottom 30 had .020. The gap between the two is just .089, or about 2.25 runs over a full season. So if the guys who pitch more are allowed to do so because they pitch better in the clutch, it is not making much difference.
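The arithmetic for that gap can be sketched as follows. The per-9-IP rates fed to equation (3) are made up for illustration; the group differentials (-.069 and .020) are from the study:

```python
def predicted_r9(hr, nonhr, bb):
    """Equation (3): runs per 9 IP from HR, non-HR hits, and walks per 9 IP."""
    return 1.38 * hr + 0.546 * nonhr + 0.319 * bb - 2.29

# Hypothetical starter allowing 1.0 HR, 8.0 non-HR hits, and 3.0 BB per 9 IP:
print(round(predicted_r9(1.0, 8.0, 3.0), 3))  # 4.415

# Gap between the top-30-IP differential (-.069) and the bottom-30-IP
# differential (.020), converted to a 225-IP season (25 nine-inning games):
gap = 0.020 - (-0.069)
print(round(gap * 225 / 9, 3))  # 2.225
```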

There is one problem with my group of the 30 pitchers who pitched the fewest innings. Some of them had already been pitching for a long time by 1996, so it is possible that they were actually good clutch pitchers and that they keep this bottom group from looking bad. Others were just starting their careers and may still learn to pitch in the clutch. But I did find 8 of them who did not start their careers until 1996 and did not pitch in 2005 or 2006. So, assuming that their careers are over, they are a good group of potentially "bad" clutch pitchers to look at. Yet their average difference was -.012, meaning they gave up slightly fewer runs than expected. They were just plain bad pitchers, giving up 5.61 R/9.

Getting back to the top 30 in IP, Tom Glavine had the biggest negative differential. His predicted R/9 was 4.11 while his actual was 3.68, so he gave up .427 fewer runs per 9 IP than predicted. Only two others, Livan Hernandez and Ismael Valdes, saved even as many as .25 R/9. In another of my earlier studies, The Accuracy of Component ERA, I also found that there was very little clutch pitching going on. As far as I know, the component ERA formula that Bill James uses does not suffer from the bias I explained earlier. In fact, most of the long-career pitchers I looked at in that study actually gave up more runs than expected, meaning they were not clutch at all.