# Valuing Relievers - A Thought Experiment

Some form of Win Probability analysis is probably the most common way for a sabermetrician to measure a reliever's performance.  Whether the stat is simple Win Probability Added or the more complex WXRL, the change in game state seems to be the jumping-off point.

One of the flaws of using Win Probability is that all of the credit (or debit) for a given play is traditionally given to the pitcher rather than the defense.  We know from DIPS theory that this isn't the proper approach.  I'm not going to explain DIPS here (most of you already understand it), but the key point is that most major league pitchers have little impact on whether a ball in play is converted into an out.  Further studies have shown that pitchers do have some influence on the conversion rate depending on the batted ball type - whether the ball is a grounder, fly ball or line drive.  Stats like tRA, LIPS and SIERA use the mix of batted ball types a given pitcher allows to estimate his "true" value.

But these stats don't account for game state or the timing of events - the key drivers behind win probability analysis, which, as mentioned above, is the predominant method for evaluating relievers.  They are also built around likely outcomes rather than the actual outcomes that form the basis of win probability.  So how do we marry the two approaches?  Is there a way to get the benefits of both win probability and defense independence?

I have an idea for an answer.  I don't know how correct it is, but since it involves a lot of heavy lifting, and my new motto is "Work smarter, not harder," I thought I'd write up this thought experiment to get some feedback before trying it out.

The approach centers around expected win values for a given batted ball type.  The first step would be to start with each of the 24 base/out states and look at actual results for line drives, ground balls and fly balls, to determine the probability of each expected end state.
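A rough sketch of that first step might look like the following.  Everything here is hypothetical - the field names are made up, and a real version would tally actual play-by-play event data rather than a toy list:

```python
from collections import Counter, defaultdict

def transition_probabilities(plays):
    """Tally end states for each (base/out state, batted ball type) pair,
    then normalize the counts into probabilities."""
    counts = defaultdict(Counter)
    for play in plays:
        counts[(play["start_state"], play["ball_type"])][play["end_state"]] += 1
    return {
        key: {end: n / sum(ctr.values()) for end, n in ctr.items()}
        for key, ctr in counts.items()
    }

# Toy input: three line drives from the same starting base/out state.
plays = [
    {"start_state": "1st/2nd, 1 out", "ball_type": "LD", "end_state": "1st/2nd, 2 out"},
    {"start_state": "1st/2nd, 1 out", "ball_type": "LD", "end_state": "bases loaded, 1 out"},
    {"start_state": "1st/2nd, 1 out", "ball_type": "LD", "end_state": "1st/2nd, 2 out"},
]
probs = transition_probabilities(plays)
print(probs[("1st/2nd, 1 out", "LD")])
```

With enough seasons of data, the table for each state/batted-ball pair should stabilize into something like the fake example below.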

For example, say we have runners on first and second with one out, and the batter hits a line drive.  Let's take a look at a (fake) probability table for that event.

| Base/Out State Outcome | Probability |
| --- | --- |
| Runners on first and second, two outs | 25% |
| Bases loaded, one out | 20% |
| Runners on first and third, one run scored, one out | 20% |
| Runners on second and third, one run scored, one out | 15% |
| Runner on second, two runs scored, one out | 9% |
| Runner on third, two runs scored, one out | 5% |
| No runners on, three runs scored, one out | 5% |
| Inning-ending double play | 1% |

So we know what the possible (fake) end states are from the event, as well as the likelihood that each occurs.

We now apply this knowledge using win probability.  Let's say that our starting point was runners on first and second with one out in the bottom of the ninth with the away team up by one.  Assuming a run environment of 4.5 runs per game, the probability of the home team winning the game is 0.3345.

Let's look at the same table and see what the win value is for each of the possible events.  Keep in mind that the probabilities are fake, but the win values are real.  Also, all win values are from the perspective of the pitcher.

| Base/Out State Outcome | Probability | Win Value |
| --- | --- | --- |
| Runners on first and second, two outs | 25% | .1638 |
| Bases loaded, one out | 20% | -.2020 |
| Runners on first and third, one run scored, one out | 20% | -.4892 |
| Runners on second and third, one run scored, one out | 15% | -.5094 |
| Runner on second, two runs scored, one out | 9% | -.6655 |
| Runner on third, two runs scored, one out | 5% | -.6655 |
| No runners on, three runs scored, one out | 5% | -.6655 |
| Inning-ending double play | 1% | .3345 |

We multiply the win value of each outcome by the probability that it occurs, then sum across all the outcomes to get the (fake) expected win value of the event: -.2968.

So if a pitcher gives up a line drive in this situation, he gets credit for -.2968 of a win.  The same analysis can be done for a ground ball and a fly ball (as well as a strikeout, walk, HBP, etc.).
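Here's that calculation spelled out.  The win values are the real ones from the table; the probabilities remain fake:

```python
# (probability, win value) pairs for each possible end state of the line
# drive, taken from the table above.  Probabilities are illustrative only.
outcomes = [
    (0.25,  0.1638),   # runners on first and second, two outs
    (0.20, -0.2020),   # bases loaded, one out
    (0.20, -0.4892),   # first and third, one run scored, one out
    (0.15, -0.5094),   # second and third, one run scored, one out
    (0.09, -0.6655),   # runner on second, two runs scored, one out
    (0.05, -0.6655),   # runner on third, two runs scored, one out
    (0.05, -0.6655),   # no runners on, three runs scored, one out
    (0.01,  0.3345),   # inning-ending double play
]

expected_win_value = sum(p * wv for p, wv in outcomes)
print(round(expected_win_value, 4))  # -0.2968
```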

Repeat for each plate appearance and sum them up, and you get a win probability metric based on batted ball type.

Of course there are some issues with an approach like this.  The biggest is that we're combining two analytical styles - one based on actual events and one based on the average value of events.  Marrying them together in the same framework runs the risk of breaking both approaches.

For example, I'm not certain that we're doing the right thing in reverting back to the actual game state for every plate appearance after the first.  I can envision a probability tree where the starting game state for event two is each of the possible ending states for event one, and so forth.  But then we're going to have paths that we don't have data for.  Say the pitcher gives up three screaming line drives, but they're all hit right at fielders.  The chances of that occurring are pretty small, so most of the tree's branches would extend the inning well past three events - and we have no observed batted balls to fill in those counterfactual plate appearances.

I think the suggested approach is a fairly good way of combining win probability analysis with DIPS theory.  A better method might be allocating the WPA of each event to the pitcher and the defense, but that is fraught with its own perils.

What about you guys?  Do you think this is worth exploring?  What am I missing, and what improvements can be made?