
OBP, SLG, and the Variability of Seasons

Baseball seasons come with a certain amount of variability. However, we only get one season to observe. So how could we answer whether certain hitting styles lead to more variable seasons?


Life comes with a certain amount of variability. How much variability depends on the activity that we're talking about. This has been a theme of mine through several posts, and it's one I want to go back to for a moment.

Baseball is, of course, highly variable. However, baseball adds one extra difficulty: each player-season is unique. We see each player's season only once, so estimating the variation in performance a player could produce is difficult. We don't get multiple replicate seasons (which are usually needed to assess variability); we get just one.

Player projection is the area where this most often comes up. Before the season starts, the experts publish their thoughts on how a player will perform, giving us a variety of potential seasons. We could take the variation among these projections as the variation in potential seasons, but that would be shortsighted. Experts tend to agree fairly closely on what a player is expected to do, so the spread of the projections is probably lower than the true variability.

Each projection comes with a certain amount of variability. The amount will of course depend on the projected wOBA and the number of plate appearances the batter receives. But do different types of hitters produce different types of variability? Would the low-average slugger vary differently from the high-average slap hitter?

To a certain extent the answer seems intuitive. A player whose wOBA depends more heavily on power numbers would be expected to show a higher amount of variation across possible seasons. However, can we (1) confirm this, and (2) quantify it? Both, it turns out, are quite possible.

The difficulty lies in "creating" similar seasons (in terms of wOBA) with different approaches (in terms of SLG and OBP) out of thin air. To ease this difficulty, it is preferable to create seasons based on real seasons from the past 10 years.

Creating and Choosing Seasons

So to begin with, we take all the qualified batting seasons between 2003 and 2012 from FanGraphs, a total of over 1,500 seasons to work with. Specifically, we need each season's stats for the wOBA components (1Bs, 2Bs, etc.). Then, to account for differences in playing time, we scale each season to the average number of plate appearances for qualified players, which in this case was 613.
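The scaling step is just a proportional rescale of every counting stat. A minimal sketch (the stat line below is illustrative, not a real FanGraphs export):

```python
# Rescale a season's counting stats to a common 613 PA.
TARGET_PA = 613

def scale_season(stats, pa, target_pa=TARGET_PA):
    """Multiply every counting stat by target_pa / pa."""
    factor = target_pa / pa
    return {stat: count * factor for stat, count in stats.items()}

# A hypothetical 650-PA season, scaled down to 613 PA.
season = {"1B": 110, "2B": 30, "3B": 3, "HR": 25, "BB": 60, "HBP": 5}
scaled = scale_season(season, pa=650)
# scaled["HR"] is 25 * 613/650, about 23.58
```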

From these scaled statistics, we can calculate wOBA (based on 2012 constants), OBP, and SLG for each season. From here, I divided all the seasons into 19 groups based on their wOBA. Each interval had length 0.010 and was of the form [0.270, 0.280), [0.280, 0.290), and so on. The only exceptions were the first and last intervals, which were [0.258, 0.270) and [0.440, 0.460).
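The wOBA calculation and the binning can be sketched as follows. The linear weights here approximate the 2012 FanGraphs constants, and the denominator is simplified to total PA for illustration (the real formula uses AB + BB − IBB + SF + HBP):

```python
# Compute wOBA from scaled components and assign a 0.010-wide wOBA bin.
WEIGHTS = {"BB": 0.691, "HBP": 0.722, "1B": 0.884,
           "2B": 1.257, "3B": 1.593, "HR": 2.058}  # approximate 2012 constants

def woba(components, pa):
    """Weighted sum of on-base events, divided by a PA-like denominator."""
    return sum(WEIGHTS[k] * components.get(k, 0) for k in WEIGHTS) / pa

def woba_bin(w):
    """Lower edge of the 0.010-wide interval containing w."""
    return (int(round(w * 1000)) // 10) / 100.0

# e.g. woba_bin(0.352) -> 0.35, i.e. the [0.350, 0.360) group
```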

Now, within these groups, there were obviously different types of hitters. For example, in the 0.340s wOBA group, there were seasons based on Luis Castillo and Adrian Beltre. They were drastically different hitters, but both were able to get wOBAs in the 0.340s range. With apologies to Aristotle, different men seek after WAR in different ways and by different means.

With this in mind, we look for hitters within each of the 19 wOBA groups who fall into certain categories: Low OBP-High SLG, Mid OBP-Mid SLG, High OBP-Low SLG. The choice of low, middle, and high can be a little subjective, but in my case I defined the low by the minimum value within the group, the middle by the median, and the high by the maximum.

For example, let's look at the 0.350s wOBA group. The low, middle, and high groups are defined by the table below.

        Min     Median  Max
OBP     0.310   0.352   0.400
SLG     0.393   0.465   0.520

From this, we can pick out a few representative seasons. In the Low OBP-High SLG category, we have a season based on Nelson Cruz in 2011: 0.310 OBP and 0.509 SLG. In the Mid-Mid category, we have 2010 Stephen Drew: 0.352 OBP, 0.464 SLG. In the High OBP-Low SLG category, it's Jason Kendall (The King of High OBP-Low SLG seasons) in 2004: 0.400 OBP, 0.393 SLG. We don't stop at these three, but collect all seasons that fall into similar veins — 24 for the 0.350s wOBA group.
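One plausible reading of this grouping scheme is to label each season's OBP (and, symmetrically, SLG) by whichever of the group's min, median, or max it sits closest to. The nearest-anchor rule here is an assumption on my part; the article leaves the exact cutoffs open:

```python
from statistics import median

def classify(value, group_values):
    """Label a season's OBP (or SLG) as low/mid/high by the nearest of
    the group's min, median, and max. Nearest-anchor rule is an
    assumption, not the article's stated method."""
    anchors = {"low": min(group_values), "mid": median(group_values),
               "high": max(group_values)}
    return min(anchors, key=lambda label: abs(anchors[label] - value))

obps = [0.310, 0.352, 0.400]  # min/median/max from the 0.350s group
# classify(0.315, obps) -> "low"
```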

In total, we have 294 seasons across all 19 wOBA levels. Each category had at least two representatives within each wOBA level. Now, we have seasons, but we need a method to assess variability.

In order to look at this, we turn to a statistical technique called bootstrapping. I've used bootstrapping in several articles, and it has been used in other articles on the site. What bootstrapping allows us to do is create several seasons from our data by sampling from the results of our actual season with replacement.

In our case, we simply treat the input "true" season as the player's true ability level at that moment. Our bootstrapped seasons (all of 613 plate appearances) are then all plausible seasons for a player of that ability. From these we can look at standard deviations, ranges, etc. of wOBA and (for the sake of interpretation) offensive WAR. For the remainder of this piece, we'll look at offensive WAR only.
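The core resampling step can be sketched like this: treat the 613 observed PA outcomes as the player's true talent distribution, draw 613 PAs with replacement many times, and score each simulated season. The weights approximate the 2012 wOBA constants and the denominator is simplified to PA; the toy season is illustrative, not a real player line:

```python
import random

WOBA_W = {"BB": 0.691, "HBP": 0.722, "1B": 0.884,
          "2B": 1.257, "3B": 1.593, "HR": 2.058, "OUT": 0.0}

def bootstrap_wobas(outcomes, n_boot=1000, seed=42):
    """Return n_boot wOBA values from resampled same-length seasons."""
    rng = random.Random(seed)
    n = len(outcomes)
    wobas = []
    for _ in range(n_boot):
        sample = rng.choices(outcomes, k=n)  # resample with replacement
        wobas.append(sum(WOBA_W[o] for o in sample) / n)
    return wobas

# A toy 613-PA season: mostly outs, plus hits, walks, and HBP.
outcomes = (["OUT"] * 413 + ["1B"] * 100 + ["2B"] * 30 + ["3B"] * 5
            + ["HR"] * 25 + ["BB"] * 35 + ["HBP"] * 5)
sims = bootstrap_wobas(outcomes)
```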

Sluggers Are More Variable

Looking at the average plausible range for the groups at each wOBA level, we see the following pattern.


Now, when I refer to the "plausible range," I mean the difference between the 97.5th percentile and the 2.5th percentile of the bootstrapped seasons. What we see is that the Low OBP-High SLG group, the "sluggers," has a wider range of plausible seasons, often wider by 0.5 WAR on each side compared to the High OBP-Low SLG group, the "slap hitters." That is a pretty reasonable margin. Also important to note is that the median (or mean) WAR for all of these groups is highly similar, each sitting around the midpoint for the group.
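Computing that 2.5th-to-97.5th percentile spread from a list of bootstrapped values is a one-liner with the standard library (a sketch, not the article's actual code):

```python
from statistics import quantiles

def plausible_range(values):
    """Return the (2.5th, 97.5th) percentile pair of bootstrapped values."""
    cuts = quantiles(values, n=40)  # 39 cut points, every 2.5%
    return cuts[0], cuts[-1]
```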


The standard deviations of WAR show a highly similar pattern. Sluggers are still the most variable, and WAR standard deviation increases as WAR increases. In the end, we can sum this up by confirming the obvious: Sluggers are more variable than slap hitters, while players in neither group fall somewhere in the middle.

Dealing with PAs

The final component of season variability to look into is the number of plate appearances. Without running the bootstrap, I can tell you that as the number of plate appearances increases, our standard deviation and plausible range are going to increase. However, we still want to quantify this for different wOBA levels and batting style groups.

To do this, we select the most representative season from each group for each wOBA level. Then, we just adjust the bootstrap sample size from 613 to whatever amount desired. In this case, I looked at PA totals of 410 to 735 at intervals of 25 PAs. Then, we look at the plausible ranges and standard deviations for each number of PAs.
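Varying the bootstrap sample size is just a change to the resample length. A sketch of the sweep over PA totals, using a counting total of approximate 2012 linear weights as a stand-in for a counting stat like WAR (the toy season is illustrative):

```python
import random
from statistics import stdev

WOBA_W = {"BB": 0.691, "HBP": 0.722, "1B": 0.884,
          "2B": 1.257, "3B": 1.593, "HR": 2.058, "OUT": 0.0}

def sd_by_pa(outcomes, n_boot=500, seed=7):
    """Bootstrap SD of total linear-weight value at each PA count."""
    rng = random.Random(seed)
    results = {}
    for pa in range(410, 736, 25):  # 410 to 735 in steps of 25
        sims = []
        for _ in range(n_boot):
            sample = rng.choices(outcomes, k=pa)
            sims.append(sum(WOBA_W[o] for o in sample))
        results[pa] = stdev(sims)
    return results

outcomes = ["OUT"] * 400 + ["1B"] * 150 + ["HR"] * 63  # toy 613-PA season
results = sd_by_pa(outcomes)
```

As with WAR, the spread of this counting total grows as the number of PAs grows.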

The results are a little difficult to visualize, as we are now dealing with 3 dimensions. Here, I link to the graphed results for all wOBA levels. Below, I give one representative wOBA level: the 0.350s wOBA range.


As we can see, the variability increases as the number of PAs increases. What is interesting is that all three groups increase at a similar rate. In other words, sluggers don't stabilize differently from slap hitters.

Now, note that because this uses only one season per group, it gives us just a rough idea. We therefore repeat this across all 294 created seasons mentioned above.

To finish it out, I attach the following chart. In it, we have each wOBA level, the OBP/SLG group, and the standard deviation of WAR at each number of plate appearances. So, to find the plausible interval of a player's season WAR, we take

Projected WAR (or Season WAR) ± 2 × WARsd
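As a worked example with hypothetical numbers, a 3.0-WAR projection with a 0.6 WAR bootstrap standard deviation gives:

```python
# Hypothetical projection and bootstrap SD, plugged into the interval.
projected_war = 3.0
war_sd = 0.6
low = projected_war - 2 * war_sd
high = projected_war + 2 * war_sd
# (low, high) is roughly (1.8, 4.2)
```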

So, in the end, we confirm a few intuitive notions with data and provide a general floor and ceiling for players based on their general hitting style.

All season data on which the created seasons are based comes from FanGraphs.