Voros McCracken is one of the most heralded names in sabermetrics. That is because Voros is the founding father of one of the most revolutionary axioms in baseball: namely, hurlers appear to have no control over hits on balls in play! As avid readers of BtB, I'm sure this isn't news to you; in fact, it is almost a basic tenet of our baseball knowledge. But in 2001, just 5 years ago, this finding was, in the words of Mao Zedong, a Great Leap Forward. Since then, various analyses have shown that Voros' sweeping conclusions, while generally true, don't always hold. Indeed, over many seasons, and for certain types of pitcher (e.g., knuckleballers), BABIP can be controlled. An excellent article by Tom Tippett, of Diamond Mind, details the argument well.

Following this astonishing conclusion, Voros developed a statistic called DIPS ERA (dERA) to measure a pitcher's true ability in naked, defense-independent terms: strikeouts, walks, HBP and home runs. This is all well and good, but DIPS is actually a very complex beast. What I want to explore for the remainder of this article is how the DIPS formula actually works.

There are three incarnations of DIPS, imaginatively named 1.0, 2.0 and 3.0! Voros built 1.0 and 2.0, and David Gassko, co-contributor to BtB and all-round baseball guru, developed version 3.0, which splits out batted ball types. To understand what is going on, it makes most sense to start with DIPS 1.0, which is calculated as:

           (IP*2.4) + (H*.83) + (HR*11.05) + (BB*2.81) - (SO*1.59)
DIPS ERA = --------------------------------------------------------
           (IP*0.71) + (H*.244) + (SO*.097) - (HR*.244)

Not simple, is it? So how do we break down this equation to work out exactly what is going on? The best way is to carry out thought experiments. By taking a series of imaginary, contrasting hurlers we can set up "scenarios" to help work out exactly how DIPS behaves.

Let me introduce you to Barry Boring. Barry is a boring pitcher in every sense. He doesn't walk batters, strikes out no one, and doesn't give up any hits whatsoever. That actually makes him, if not altogether a fantasy, pretty useful; so what is his dERA? Plugging in the numbers we get a DIPS of exactly 3.38. At first glance this seems high. After all, Barry churns out inning after inning giving up no hits, no walks and, by inference, no runs. His ERA would be a big fat zero. Why is his DIPS 3.38? Well, it turns out that Barry is actually the luckiest slinger in the league. You see, every batter that Barry faces hits a ball in play, and his defense manages to get the out. This means that Barry's BABIP is 0. The 3.38, which is derived from the two IP terms in the DIPS equation, sets a baseline for the expected dERA once BABIP is stripped out.
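The formula is easy to sanity-check in a few lines of code. Here is a sketch in Python (the function name is my own) that plugs Barry's line into the DIPS 1.0 equation:

```python
def dips1_era(ip, h, hr, bb, so):
    """DIPS 1.0 ERA, using the coefficients from the formula above."""
    num = ip * 2.4 + h * 0.83 + hr * 11.05 + bb * 2.81 - so * 1.59
    den = ip * 0.71 + h * 0.244 + so * 0.097 - hr * 0.244
    return num / den

# Barry Boring: nine innings, nothing but balls in play turned into outs.
barry = dips1_era(ip=9, h=0, hr=0, bb=0, so=0)
print(round(barry, 2))  # 3.38 -- the baseline dERA with BABIP stripped out
```

With everything except IP set to zero, only the two IP terms survive, and 2.4/0.71 gives the 3.38 baseline.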

Now suppose that Barry's luck runs out, to an extent at least. Let's see what happens if Barry has a league-average BABIP, which would be around .290. Whoa ... his DIPS remains at 3.38. Can that be right? Yes it can. Remember, BABIP is largely a function of one thing: luck. Although Barry's ERA will have increased to a less fantastical number, his DIPS stays exactly the same. By stripping out luck, DIPS gives a purer measure of how good a pitcher is.

But why have hits in a DIPS equation at all? It is a little strange, isn't it, given that we have said that a hit (ignoring the long ball) is effectively out of the control of the pitcher. To calculate DIPS we use counting statistics (K, BB, HR). So in order to make DIPS equivalent to ERA, which is a rate statistic, we need some idea of the number of batters faced. As you can see, this isn't explicitly part of the equation; instead, it is calculated implicitly using IP and hits. This is why IP and hits carry (almost exactly) the same ratio of numerator coefficient to denominator coefficient (if you don't believe me, divide the numerator coefficient by the denominator coefficient for both IP and hits).
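That ratio check takes one line each. Note the two ratios come out close rather than identical, which I take to be rounding in the published coefficients:

```python
# The implicit batters-faced term: IP and H carry (nearly) the same
# numerator-to-denominator coefficient ratio in the DIPS 1.0 formula.
ip_ratio = 2.4 / 0.71    # ~3.38
h_ratio = 0.83 / 0.244   # ~3.40
print(round(ip_ratio, 2), round(h_ratio, 2))
```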

On a road trip Barry goes to play the Rockies in the paper-thin air of Denver. Suppose he pitches a complete game and only gives up 1 dinger; what is his dERA both with and without the homer? Well, we know from earlier that without the homer Barry's dERA is 3.38, but with the long ball it rises to 5.23 - a difference of 1.85 per 9 IP (this is called the "plus 1" method). Does this strike you as a little strange? We know that the value of a HR is about 1.4 according to linear weights, so why does DIPS value it at 1.85?
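Here is that plus-1 check in Python (the function name is mine; note the homer counts as both a hit and a HR in the formula's inputs). With the coefficients as printed I get values a hair above the article's 5.23 and 1.85, presumably down to rounding:

```python
def dips1_era(ip, h, hr, bb, so):
    num = ip * 2.4 + h * 0.83 + hr * 11.05 + bb * 2.81 - so * 1.59
    den = ip * 0.71 + h * 0.244 + so * 0.097 - hr * 0.244
    return num / den

clean = dips1_era(9, 0, 0, 0, 0)   # no homer
one_hr = dips1_era(9, 1, 1, 0, 0)  # the homer is both a hit and a HR
print(round(one_hr, 2), round(one_hr - clean, 2))  # ~5.24 and ~1.86
```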

This is where Barry Boring comes to the end of his useful life. Linear weights are designed for a specific run environment - roughly 5 runs per 9 innings. The run environment in our current example is a paltry 1 run per 9 innings; it is no surprise that the HR is worth more. Let's see if we can make more sense of DIPS with a new pitcher, Nate Normal. (Note: this is the beauty of a system such as BaseRuns. BaseRuns accurately models the fundamental scoring in baseball and as such doesn't suffer from problems that afflict run-specific systems, like linear weights.) Suppose Nate has pitched 9 innings, given up 6 hits, struck out 6 and walked 2. His DIPS is a very respectable 2.68. Now suppose that one of those hits was lined over the fences; what happens to his DIPS? It increases to 4.11, which is a change of 1.43 - bang on the value of a HR in the current run environment.
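The same sketch reproduces Nate's numbers (again, tiny rounding differences against the article's 2.68 are expected from the printed coefficients):

```python
def dips1_era(ip, h, hr, bb, so):
    num = ip * 2.4 + h * 0.83 + hr * 11.05 + bb * 2.81 - so * 1.59
    den = ip * 0.71 + h * 0.244 + so * 0.097 - hr * 0.244
    return num / den

nate = dips1_era(9, 6, 0, 2, 6)     # 6 H, 0 HR, 2 BB, 6 K in 9 IP
nate_hr = dips1_era(9, 6, 1, 2, 6)  # one of the six hits leaves the park
print(round(nate, 2), round(nate_hr, 2), round(nate_hr - nate, 2))
```

The difference of ~1.43 is the linear-weights HR value in this more normal run environment.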

We can also use the plus-1 method to check whether DIPS values the walk and the strikeout in line with linear weights. Using Nate Normal, adding 1 walk sees his DIPS go up from 4.11 to 4.45, a difference of .34, which is the linear weight value of a walk. Finally, we repeat for Ks. But here we find that if we add another strikeout, DIPS falls to 4.21 - a drop of 0.24. Hang on - linear weights tell us that we should expect a K value of around 0.3! What is going on?
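Both plus-1 checks, run in sequence from Nate's post-homer line:

```python
def dips1_era(ip, h, hr, bb, so):
    num = ip * 2.4 + h * 0.83 + hr * 11.05 + bb * 2.81 - so * 1.59
    den = ip * 0.71 + h * 0.244 + so * 0.097 - hr * 0.244
    return num / den

base = dips1_era(9, 6, 1, 2, 6)     # Nate with the homer: ~4.11
plus_bb = dips1_era(9, 6, 1, 3, 6)  # add one walk
plus_k = dips1_era(9, 6, 1, 3, 7)   # then add one strikeout
print(round(plus_bb - base, 2))     # ~0.34, the linear-weights walk value
print(round(plus_k - plus_bb, 2))   # ~-0.24, short of the ~0.3 we expected
```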

There are actually two factors at work here. First, part of the run value of a K is that it prevents homers as well as hits. Because we already have a HR term in DIPS, the K gets a little less credit than it would in a pure linear weights system. Second, the divisor is IP, not BFP, which also reduces the value of a K. A somewhat complicated article by Kevin Harlow discusses the principle behind this (albeit for a slightly different example). As you will see if you click on the link, the math is fairly involved, but the approach is to re-weight the K based on balls hit in play (largely accounting for the first effect described above).

Those of you familiar with FIP (Fielding Independent Pitching) should recognize the values above. FIP was created by Tangotiger as a simpler version of DIPS. The formula is:

      HR*13 + (BB+HBP)*3 - K*2
FIP = ------------------------- + 3.2
                 IP

Dividing the coefficients by 9 we get (roughly) the same values as for DIPS above. As such, if you want to calculate a rudimentary DIPS then you can do a lot worse than working out a pitcher's FIP (The Hardball Times is a handy, and free, reference for checking out FIP).
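A quick sketch makes the correspondence concrete (the 3.2 constant is the one used in the formula above; the exact constant is chosen to put FIP on the league's ERA scale):

```python
def fip(ip, hr, bb, hbp, k, constant=3.2):
    """Fielding Independent Pitching with a fixed league constant."""
    return (hr * 13 + (bb + hbp) * 3 - k * 2) / ip + constant

# Per-9-innings event values implied by FIP's coefficients,
# to compare with the DIPS plus-1 values (~1.43 HR, ~0.34 BB, ~0.24 K):
print(round(13 / 9, 2), round(3 / 9, 2), round(2 / 9, 2))  # 1.44 0.33 0.22

# Nate Normal's line with the homer:
print(round(fip(ip=9, hr=1, bb=2, hbp=0, k=6), 2))
```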

OK, what about DIPS 2.0 & 3.0? How do these differ from the DIPS 1.0 formula above? Well, 2.0 is just an evolution of the 1.0 formula, which adjusts for the fact that pitchers *do* have some, but limited, control of BABIP. DIPS 3.0 is altogether more interesting. It was developed by David Gassko, who wrote about it a couple of weeks ago here at BtB. The premise is that although BABIP is largely random, some of the constituent batted ball types do have predictive value. There are really four batted ball types to consider: infield flies, outfield flies, ground balls, and line drives. David regresses batted balls, along with K, HBP and BB, against ERA to estimate the coefficients in the DIPS 3.0 equation. David's equation is:

DIPS ERA = (-0.041*IF + 0.05*GB + 0.251*OF + 0.224*LD + 0.316*BB - 0.12*SO + 0.43*HBP) / IP * 9

The first thing you'll notice is that there is no HR term. This is because home runs are a function of outfield flies, so the value of the HR is reflected in the outfield fly coefficient. How do we interpret these coefficients? Rather than try to inadequately proffer a precis of DIPS 3.0, I'll link to an article by David Gassko, which describes the method far more succinctly than I could. But like the other versions of DIPS, version 3.0 ascribes offensive value to different events and then normalizes it to ERA.
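As a sketch of how the equation is applied, here it is in Python with a completely invented batted-ball line (the input numbers are mine, for illustration only). Note the negative infield-fly coefficient: pop-ups are near-automatic outs, so they actually lower the estimate:

```python
def dips3_era(if_, gb, of, ld, bb, so, hbp, ip):
    """David Gassko's DIPS 3.0 equation, as printed above."""
    runs = (-0.041 * if_ + 0.05 * gb + 0.251 * of + 0.224 * ld
            + 0.316 * bb - 0.12 * so + 0.43 * hbp)
    return runs / ip * 9

# Hypothetical season line (numbers invented for illustration):
print(round(dips3_era(if_=10, gb=120, of=90, ld=50,
                      bb=30, so=100, hbp=5, ip=100), 2))  # ~3.51
```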

And that is DIPS in a (small) nutshell. Those of you with a longer memory than I will remember that my last piece looked at the theory behind EqA. A common theme runs through both these analyses: the importance of run value. All good statistics reflect the correct value of the offensive events they try to measure. DIPS is no different, which is why it is *the* accepted measure of how to best evaluate a pitcher. Over time we can only hope that it becomes more accepted by mainstream baseball media and fans.