Using the current definition, there have been 23 perfect games thrown in the rich and colorful history of baseball. With this perfection comes an immortality that at times overshadows a player's complete body of work. The fact that some surprising names belong to this most selective of fraternities, while a few names are conspicuously missing, is part of what inspires the discussion that brings us together here at our cozy internet outpost of Beyond the Box Score.
How do they do it? What, aside from the requisite 27 up, 27 down, occurs to achieve this most impressive of baseball feats? Can we glean any comparisons from those who have accomplished this feat and apply them to those who have come so achingly close?
Let's give it a go right now. Using some of the thoughts discussed here previously at Beyond the Box Score regarding what makes a no-hitter so special, but sometimes also *not* so special, let's take a different look at what makes perfection and almost perfection similar, but also so painfully different. Are the differences, when we control for walks allowed, more related to what the pitcher controls or what the defense behind him dictates?
First, let's get some data querying items out of the way. Using FanGraphs and Baseball-Reference data sets, all perfect games and complete-game, one-hit, no-walk, nine-inning performances from 2002 to the present were collected. From there, we will look at a handful of statistics to help parse out the differences between our groups (aside from the obvious hit allowed). In particular, out types -- ground balls, fly balls, and strikeouts -- as well as strike types -- swinging strikes and called strikes (labeled "looking" below) -- and overall strike percentages will be compared across the two groups (labeled no-hit and one-hit), as well as against each pitcher's stats from the season in which the perfect / almost perfect game occurred. Season stats are labeled with the suffix '_yr' to distinguish them from the game stats.

This is done not only to compare and contrast the relevant stats in an effort to find differences between the groups, but also to see if there was something different about the pitcher's outing compared to his average start that season. Was he throwing more strikes or inducing more ground balls than usual? Was there some other method to his madness that could help explain his gem of a game? These sorts of questions will be answered and (hopefully) will help guide any inferences as to what made or broke an outing.
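For the curious, here's a minimal sketch of what that query might look like in Python with pandas. The file name and column names (CG, IP, BB, H) are hypothetical stand-ins for whatever your game-log export actually uses, and a simple box-score filter like this only approximates a true perfect game, which also requires no hit batsmen, errors, or other baserunners.

```python
import pandas as pd

# Hypothetical game-log export; column names are assumptions, not the
# actual FanGraphs / Baseball-Reference field names.
logs = pd.read_csv("pitcher_game_logs_2002_present.csv")

gems = logs[
    (logs["CG"] == 1)      # complete game
    & (logs["IP"] == 9.0)  # exactly nine innings pitched
    & (logs["BB"] == 0)    # no walks allowed
    & (logs["H"] <= 1)     # at most one hit
]

# The two comparison groups used throughout:
perfect = gems[gems["H"] == 0]  # no-hit (perfect, barring other baserunners)
almost = gems[gems["H"] == 1]   # one-hit, no-walk complete games
```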
With that, let's look at some data. The first table breaks down the average game and season out types for perfect and almost perfect games.
Hits | GB | GB/9_yr | FB | FB/9_yr | K/9 | K/9_yr |
---|---|---|---|---|---|---|
0 | 7.43 | 11.28 | 9.43 | 9.67 | 10.14 | 7.47 |
1 | 11.14 | 11.82 | 10.14 | 9.83 | 6.9 | 7.69 |
Let's ruminate on these values for a moment. Right away, we see some differences between perfect and almost perfect games: there are fewer ground balls (GB) and more strikeouts (K/9) in perfect games compared to almost perfect games. While one-hitters have a slightly higher number of fly ball outs (FB), there doesn't appear to be an appreciable difference compared to perfect games. Turning to the season rates between our two groups, we don't see any huge differences; on the surface, our groups look alike and don't appear to have any drastic differences with respect to how they obtain outs, at least in the season of interest.
Sticking with this table, let's now turn our attention to within-pitcher differences -- did what a pitcher did in his perfect / almost perfect game deviate from what he normally did over the season? Essentially, we are asking whether the outing was a huge aberration, or business as usual with respect to how the pitcher got his outs. Again eyeballing our averages, we see that while the one-hit games look like business as usual, the perfect game pitchers struck out more hitters on average than in their typical outings (and than their almost perfect counterparts), while at the same time inducing fewer ground ball outs.
In general, we see a pattern forming -- in perfect games, pitchers are taking a handful of outs that would typically be grounders and taking matters into their own hands by striking those batters out. When we run the proper flavor of statistical hypothesis test, we find statistically significant differences in ground ball and strikeout rates between perfect and almost perfect games, as well as between the game and season ground ball and strikeout rates of individual pitchers who threw perfect games. No statistically significant differences were found between the game and season rates of the almost perfect pitchers.
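To make the "proper flavor" concrete: a between-group comparison like ground ball outs in perfect versus almost perfect games calls for an independent-samples t-test, while a game-versus-season comparison for the same pitchers calls for a paired t-test. A minimal sketch, continuing with the perfect and almost DataFrames from the earlier sketch and assuming hypothetical column names for the game and season rates:

```python
from scipy import stats

# Between groups: ground ball outs in perfect vs. almost perfect games.
# Welch's version (equal_var=False) is a safer default at these sample sizes.
t_gb, p_gb = stats.ttest_ind(perfect["GB"], almost["GB"], equal_var=False)

# Within pitchers: game K/9 vs. that same pitcher's season K/9 -- paired
# observations, so a paired t-test is the right flavor.
t_k, p_k = stats.ttest_rel(perfect["K9"], perfect["K9_yr"])

print(f"GB outs, perfect vs. almost perfect: t={t_gb:.2f}, p={p_gb:.3f}")
print(f"K/9, game vs. season (perfect games): t={t_k:.2f}, p={p_k:.3f}")
```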
What we have so far is a lot of evidence pointing towards strikeouts playing a large role in defining the distinction of a perfect game from an almost perfect game. Let's dig a little deeper and take a look at strike data.
With base on balls held constant across our groups at zero, we now can break down the remaining variable that is affected solely by the pitcher, making an initial assumption that umpires are calling strikes strikes, and balls ... balls. With that caveat in mind, what can we learn about the types of strike being thrown?
Hits | Strike% | Strike%_yr | Strike Looking% | Strike Looking%_yr | Strike Swing% | Strike Swing%_yr |
---|---|---|---|---|---|---|
0 | 68.71 | 65.86 | 29.46 | 18.29 | 12.27 | 9.54 |
1 | 68.57 | 64.48 | 28.06 | 17.62 | 9.63 | 9.26 |
Here with our second table, we are again looking at differences across the perfect / almost perfect groups, as well as within pitchers, comparing game results to season averages. In contrast to our first table, however, we are looking at percent rates, for simplicity and ease of data retrieval. Essentially, working in percentages allowed me to skip one or two steps of data massaging, since both of our data sources present this data in percent form. Looking across the perfect / almost perfect groups, we see no remarkable differences between the two in the rate or type of strikes thrown. Overall, both groups are throwing a lot of strikes, with a slight nod towards the perfect gamers with respect to bats missed, in the form of swinging strike percentage. Looking at their respective season data, we again don't see a huge difference between the two.
When we start to look at within-pitcher differences, we begin to see some disparity. With the perfect gamers, we see a large jump in called strikes (Strike Looking) during the perfect game compared to the season average, with a similar but smaller jump in swinging strike rate. A small bump in overall strike rate is also seen, but it isn't terribly egregious. The almost perfect group shows much the same trend in called strikes, though their swinging strike rate barely budges. However, when applying a statistical test of significance, we find that the only significant changes are the called strike rates of both groups relative to their season averages, and the overall strike rate of the almost perfect group relative to its season average.
So we have some reasonably interesting findings just using tests of statistical significance -- in our case, t-tests. Let's now go one further and see if we can take what we've learned and use logistic regression as a way to possibly predict the outcome of a perfect game. I will be brief and say that our previously seen differences hold true when modeled independently of one another, but when modeled together, things fell apart a bit, in particular when modeling the out types together. This happened for a couple of reasons, one being a puny sample size, but the main one being quasi-complete separation of the data (particularly in the ground ball stat): our predictor variable (ground ball outs) separated our outcome variable (perfect game or one-hitter) almost completely, hence the 'quasi'. For ground ball outs, we see an almost perfect delineation of the no-hit and one-hit states simply by the number of ground outs made -- for our data set, that imaginary line sits right around five. We also see this occurring in the strikeout and fly ball variables, but the phenomenon is much less pronounced. When a logistic regression is performed on the out type season data or any of the strike type data, we get no statistically significant results that would lead us to a predictor capable of distinguishing a perfect game from a really well-pitched one-hit, no-walk complete game.
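For readers who want to see quasi-complete separation misbehave for themselves, here's a minimal sketch using statsmodels, again leaning on the hypothetical DataFrames and column names from the earlier sketches. When nearly every game below about five ground outs is a perfect game and nearly every game above is not, the maximum likelihood estimate wants to push the ground ball coefficient toward infinity, so expect convergence warnings and enormous standard errors rather than a tidy, significant fit.

```python
import pandas as pd
import statsmodels.api as sm

# Stack both groups; outcome is 1 = perfect game, 0 = one-hitter.
games = pd.concat([perfect, almost])
y = (games["H"] == 0).astype(int)

# Ground ball outs as the lone predictor; quasi-complete separation
# around ~5 ground outs makes this model numerically unstable.
X = sm.add_constant(games[["GB"]])

result = sm.Logit(y, X).fit()  # watch for convergence / separation warnings
print(result.summary())
```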
With the statistical gymnastics out of the way, and a few analytic hiccups along the way, what can we say about our data, if anything?
Overall, the data shows us that, not shockingly, the difference between a hit and pitching perfection largely depends on the pitcher keeping the ball out of his fielders' gloves -- we saw a statistically significant increase in strikeouts and a concomitant drop in ground ball outs in perfect games, both relative to the almost perfect games and relative to the pitcher's season in general. Perfect games tend to be infrequent statistical outliers, with strikeout and ground ball rates that appear anomalous and unsustainable. With respect to the strike data, we can't help but think that, despite a pitcher's best efforts, there is a helping hand lent by the umpire and his interpretation of the strike zone. In the end, no one particular pitching style or stat explains all of what it takes to achieve historical perfection, although we can generalize and say that the pitcher who can turn would-be ground ball outs into strikeouts, all while missing bats, will have the best shot at becoming the next Dallas Braden.
Or Mark Buehrle.
Philip Humber?
OK, the next Matt Cain.
. . .
All statistics courtesy of Baseball-Reference and FanGraphs.
Stuart Wallace is a writer at Beyond The Box Score. You can follow him on Twitter at @TClippardsSpecs.