

DIPS, Springsteen, and why offense is declining

The run environment in baseball has been in free-fall over the last few years. Is the broadening acceptance of sabermetric ideas to blame?

The commissioner of the E Street League. (Photo: Jason Kempin)

Last month, I set off a debate when I published the results of my research on the annual cost of free agent wins. The controversy centered on whether it is better to use actual observed wins above replacement (as I did) or the output of a projection system (as Dave Cameron does at FanGraphs) as the denominator in calculating the price of a win. (For those interested, there was also a great discussion of the issue on Tom Tango’s blog.)

One of the reasons I use observed value instead of in-a-vacuum projected value is the heterogeneity of each player’s value across teams — i.e., that each organization’s specific ballpark, roster composition, and non-player personnel will affect how well a player performs with his new team. And over the last few weeks, a train of thought starting with that idea may have led me to an answer to the question of why the run environment is declining.

In short, my argument is this: the more teams use sabermetric numbers and ideas — specifically, defense-independent pitching statistics and the theory behind them — to make personnel decisions, the more they will use this knowledge to collectively maximize run prevention leaguewide.

Indulge me now in walking through a short thought experiment to illustrate the mechanisms at work in turning knowledge of DIPS theory into a lower run environment. Like a good economics student, I’ll start by building a very basic model, then explain how the ideas could be generalized and applied to the major leagues.

Imagine there is an alternative professional baseball circuit called the E Street League (I’m a big Springsteen fan). As they head into the offseason, there are two teams in the league who are looking for starting pitchers: the Jungleland Runners and the Tenth Avenue Rivers. There are other teams in the league too — the Nebraska Badlands, the Youngstown Wrecking Balls, and the Thunder Road Tunnels of Love, just to name a few — but for the purposes of this thought experiment they exist only to ensure that the Rivers and Runners each care more about making their own team better than about making the other team worse.

The Runners and Rivers are in similar financial situations and currently project as equally good for 2014. The major difference between them is defense: the Runners have Andrelton Simmons-caliber fielders at every position, while the Rivers could plug Miguel Cabrera in anywhere on the diamond and see a defensive improvement. (The Rivers make up for their defensive deficiencies in other ways.) Take a second to get them straight, because we’ll be staying with them for a while.

Coincidentally, there are only two starting pitchers available as free agents in the E Street League: Sandy and Rosalita. Sandy and Rosie are identical in every way — let’s say they both project for 4.00 ERAs in the same number of innings in a vacuum — except for how they get outs. Every batter Sandy faces either strikes out, walks, or hits a home run. Rosie, by contrast, gives every hitter she faces the chance to put the ball in play. To the average team, Sandy and Rosie would be equally valuable. Sandy is a lower-risk (and thus a lower-ceiling) option than Rosie, but the expected values of their 2014 performances are exactly the same.

So who will wind up where? As recently as 10 years ago, it might have been a coin flip. But today, despite Sandy and Rosie being identical in talent, the Runners and Rivers would each have a clear preference about whom to sign, and those preferences would point in opposite directions.

The Runners would be better off with Rosie than with Sandy because their phenomenal defense will turn an unusually large proportion of the batted balls she allows into outs. With the Runners’ prodigious fielders behind her, Rosie would probably beat the 4.00 ERA both pitchers project for in a vacuum. Meanwhile, the Rivers would prefer to sign Sandy because she would be immune to the ineptitude of their fielders, where the equally talented Rosie would suffer. Signing Rosie would let the Runners press their great fielders into service, while acquiring Sandy would let the Rivers’ Swiss cheese defense off the hook.

Assume that pitching in front of the Runners’ defense would allow the über-contact pitcher Rosie to beat her ERA projection by two runs, while she would underperform her DIPS numbers by two runs with the Rivers. If the Runners sign Rosie and the Rivers sign Sandy, the two pitchers’ combined ERA will be 3.00 (a 2.00 from Rosie and a 4.00 from Sandy), far better than the combined 5.00 they would post if the signings were switched. By correctly matching pitchers with teams, everybody wins. (Except, of course, the other teams.)
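The arithmetic can be sketched in a few lines of Python, using the article’s illustrative numbers (a shared 4.00 baseline ERA and a two-run defensive swing for the contact pitcher):

```python
# Illustrative sketch of the E Street League matching problem.
# Both pitchers project for a 4.00 ERA in a vacuum; the contact pitcher
# (Rosie) gains or loses 2.00 runs depending on the defense behind her,
# while the strikeout pitcher (Sandy) is unaffected by defense.

BASELINE_ERA = 4.00
DEFENSE_SWING = 2.00  # Rosie's ERA shift based on defense quality

def rosie_era(defense: str) -> float:
    # Contact pitcher: elite defense helps, poor defense hurts
    if defense == "elite":
        return BASELINE_ERA - DEFENSE_SWING
    return BASELINE_ERA + DEFENSE_SWING

def sandy_era(defense: str) -> float:
    # Strikeout pitcher: defense-independent by construction
    return BASELINE_ERA

# Optimal matching: Rosie to the Runners (elite defense), Sandy to the Rivers (poor)
optimal = (rosie_era("elite") + sandy_era("poor")) / 2
# Switched signings: Rosie to the Rivers, Sandy to the Runners
switched = (rosie_era("poor") + sandy_era("elite")) / 2

print(optimal, switched)  # 3.0 5.0
```

Two runs of combined ERA separate the smart matching from the switched one, purely from who signs where.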

This kind of pitcher-defense optimization doesn’t just improve an individual team’s run prevention; it affects aggregate scoring leaguewide. What we will see in the E Street League as Sandy and Rosie sign their deals is two teams allowing fewer runs without any corresponding increase in the number of runs other teams allow. Fewer runs will be scored in the E Street League than there would be if the signings were switched, so the run environment will sink as a direct result of teams making smarter deals.

Change the specific numbers however you want and the fundamental results will still hold. Say Sandy is so good that the Runners would actually be better off with her than they would with Rosie — i.e., the difference between the two pitchers’ talent levels is greater than that between the two teams’ defenses. Sandy will still sign with the Rivers because the difference between her and Rosie is worth more to them than it is to the Runners, and the two teams will combine to allow fewer runs than they would if the deals were switched. That’s the principle of comparative advantage.
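To see the comparative-advantage version, take some hypothetical numbers not in the text above: give Sandy a 1.00 ERA projection to Rosie’s 4.00, so the talent gap (3.00 runs) exceeds the defensive swing (2.00 runs) and the Runners would prefer Sandy head-to-head. A brute-force check over both possible matchings still sends Sandy to the Rivers:

```python
# Hypothetical numbers for illustration: the talent gap between the
# pitchers now exceeds the defensive swing between the teams.
from itertools import permutations

SWING = 2.00
# (name, baseline ERA, affected by defense?)
pitchers = [("Sandy", 1.00, False), ("Rosie", 4.00, True)]
# ERA adjustment each team's defense imposes on contact pitchers
teams = {"Runners": -SWING, "Rivers": +SWING}

def combined_era(assignment):
    # Average ERA across the two pitcher-team pairings
    total = 0.0
    for (name, base, uses_defense), team in assignment:
        total += base + (teams[team] if uses_defense else 0.0)
    return total / len(assignment)

matchings = [list(zip(pitchers, order)) for order in permutations(teams)]
best = min(matchings, key=combined_era)

print([(p[0], t) for p, t in best], combined_era(best))
# [('Sandy', 'Rivers'), ('Rosie', 'Runners')] 1.5
```

The switched matching averages 3.50; the comparative-advantage matching averages 1.50, even though the Runners would rather have Sandy in isolation.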

In short, so long as injuries and team-specific win values are not factors, a contact pitcher will always be relatively more valuable to (and therefore presumably sign with) a good defensive team, and a strikeout pitcher will be relatively more valuable to (and presumably sign with) a bad defensive team. And leaguewide run scoring will be lower when the market operates according to this comparative advantage.

We can think about this from the other direction too. Say the Runners and Rivers are now each looking to sign one of two overall equally talented center fielders, one of whom — we’ll call her Wendy — is far better defensively than the other, whom we’ll name Mary. Assume the Runners’ staff is full of pitch-to-contact arms like Rosie and the Rivers are loaded with high-strikeout pitchers like Sandy. Naturally, the Runners will sign Wendy because they will get the most out of her glove, while the Rivers will sign Mary because her lackluster defense won’t matter as much behind their pitching staff (this also fits lyrically). Either way, the net effect is fewer runs scored than if the players and teams were matched without regard to DIPS theory.

Now let’s come back to the real world. There are 30 teams in the majors, all of which are looking for pitching at any given time (even a rebuilding club will be looking to acquire prospects and develop players). As teams gradually incorporate DIPS theory into player evaluation, more personnel choices are made in concert with clubs’ comparative advantages. And the more teams maximize what they get out of their pitchers based on the kinds of fielders they put behind them, the more the run environment declines.

To be fair, this kind of sorting is nothing new. Of course a right-handed power hitter will be more valuable in a park with a short left-field fence than in one with a distant fence, and you’d rather have a flyball pitcher in a cavernous home park than in a bandbox. So why attribute the decline in run environment to DIPS theory?

DIPS theory stands out both for the scale of its implications and for how recently it was proposed. It has been only 14 years since Voros McCracken first published his research on how little control pitchers have over the fates of batted balls in play. It was just 10 years ago that Michael Lewis described the Oakland Athletics as groundbreaking visionaries for using these ideas to evaluate pitchers. It is impossible to point to a single date when this line of thought took hold around the league, but the paradigm shift in how teams think was undoubtedly recent enough that we could still be working our way toward a new equilibrium. The same probably cannot be said for the other major causes of the heterogeneity of player values across teams.

As an aside, this kind of sorting will also cause trouble for building DIPS estimators. At the idea’s inception, it was probably reasonable to assume that pitchers with different contact tendencies were randomly distributed across teams with different fielding abilities. But if teams take their defenses into account when acquiring pitchers (and vice versa), team fielding ability will be correlated with both ERA and the defense-independent statistics used to predict it. This is a recipe for omitted-variable bias: leaving team defense out of the model distorts the estimated coefficients on the variables that remain. For example, if contact pitchers are being matched with better defenses, strikeouts will appear less important to run prevention than they actually are to the average team. In other words, the very defensive effects DIPS numbers are designed to exclude would end up coloring the estimated effects of the variables they are trying to isolate.
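A toy simulation (all numbers assumed purely for illustration) makes the bias concrete: when low-strikeout pitchers are systematically paired with good defenses, regressing ERA on strikeout rate alone understates how much strikeouts matter.

```python
# Simulated omitted-variable bias: sorting makes strikeout rate (k)
# negatively correlated with team defense (d), so a regression of ERA
# on k alone attributes some of the defensive help to strikeouts.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Strikeout rate (K/9), roughly centered on a league-average figure
k = rng.normal(8.0, 1.5, n)
# Sorting: contact pitchers (low k) land on better defenses (high d)
d = -0.6 * (k - 8.0) + rng.normal(0.0, 1.0, n)

TRUE_K_EFFECT = -0.30  # each extra K/9 shaves 0.30 off ERA (assumed)
DEF_EFFECT = -0.40     # each unit of defense shaves 0.40 off ERA (assumed)
era = 6.0 + TRUE_K_EFFECT * k + DEF_EFFECT * d + rng.normal(0.0, 0.5, n)

# OLS slope of ERA on k alone, with defense omitted
naive_slope = np.polyfit(k, era, 1)[0]

# OLS with defense included recovers the true strikeout effect
X = np.column_stack([np.ones(n), k, d])
full_slope = np.linalg.lstsq(X, era, rcond=None)[0][1]

print(round(naive_slope, 2), round(full_slope, 2))
```

With these assumed numbers the naive slope comes out near -0.06 while the full model recovers something close to the true -0.30: strikeouts look far less valuable than they are, exactly the distortion described above.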

I don’t know if this player-team sorting would actually have a big enough effect empirically to be behind the drop in the run environment, or for that matter whether the market is indeed operating according to the principles of comparative advantage (though surely at least a few teams are). I am, however, confident that putting strikeout pitchers and good fielders where they’re needed most would lead to superior run prevention. And at least on some level, that means broader acceptance of DIPS theory is bad news for hitters.

. . .

Lewie Pollis is a senior at Brown University. You can follow him on Twitter at @LewsOnFirst.
