The last time I wrote about game theory, I discussed the strategy involved in bunting for a hit. One surprising observation was that the payoffs for bunting for a hit and swinging away appear not to be equalized. This was surprising because if the payoffs are not equal, we would expect a rational decision maker to keep choosing the option with the higher payoff unless and until the payoffs equalize.

Surely this surprising bit of non-rationality in baseball is limited to the rare case of bunting for a hit? No, and don't call me Shirley.

**Table of Contents**

The Example

The Evaluation

The Caveat

Discussion Question of the Day

Today's example comes courtesy of a paper written by Kenneth Kovash and Steven Levitt, entitled "Professionals Do Not Play Minimax: Evidence From the National Football League and Major League Baseball." I'll get around to explaining what the heck minimax is, but let's start with the example.

(I should also add that the paper itself, because of idiotic rules regarding the freedom of academic work and the journals in which it is published, is not freely available. For more information on how you can help change this knowledge-limiting fact of law, see here.)

First, we begin with the assumption that pitchers have more than one pitch type. Next, we observe that pitchers have differing run expectancy values for different types of pitches. I'm sure Harry could explain this much better than I, but let's just for the moment use Zack Greinke's 2009 as an example (and oh what an example it is). Per Fangraphs:

Runs above average per 100 pitches for each pitch type:

Fastball: 1.31

Slider: 2.86

Curveball: 0.25

Change up: -0.87

These values are relative to the count and the type of event, but do not correct for defense. Additionally, they use the pitch classification from BIS, which I understand has its limitations. You might argue that we should regress these results to the mean if we're going to use them to make a decision on what pitch to throw next, and that's probably true too.

In any event, these values should give you an idea that the run expectancy of a pitcher's pitches is not the same for each pitch.

So the question is, why doesn't Greinke throw more sliders? If they're worth more than twice as many runs on a per-pitch basis, he should clearly be throwing them more, right? And the fact that the change up has a negative run expectancy indicates he should throw fewer, doesn't it?

The game theory reasoning would go something like this. We have a game where batters largely have to decide before the pitch is even thrown which pitch to "look for." So the decisions of the participants are independent but the outcomes depend on the decisions of both actors. Classic game theory scenario.

Under this structure, it appears that either: (1) Greinke's slider is so good that even when hitters expect it, they can't do much with it, or (2) they don't look for it very often. In either case, it would be better if Greinke threw more sliders. Doing so would certainly weaken (2), since hitters would start looking for the slider more often, most likely reducing its run expectancy. But it would raise the run expectancy of his other pitches in roughly equal measure.

Thus, we ought to expect an equilibrium where all of Greinke's pitches have similar (if not identical) run expectancies. And yet, the data are as plain to you as they are to me: his slider is by far better at preventing runs. What gives?

Kovash and Levitt concern themselves with the expectations we should have of batter-pitcher interactions if they were governed by what is called a "minimax" strategy. Minimax is a mixed strategy solution to a two-player, zero-sum game, which is essentially what we have with batter-pitcher strategies. The goal of a minimax strategy, formalized by mathematician John von Neumann, is to minimize the maximum possible loss, producing a stable equilibrium between the two players. (In zero-sum games, the minimax solution is equivalent to the Nash equilibrium.)
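To see how the equilibrium logic works, here is a minimal sketch in Python of a two-by-two zero-sum pitch-selection game. The payoff numbers are entirely made up for illustration (they are not Greinke's values): the pitcher mixes his two pitches at exactly the rate that leaves the batter indifferent between his two guesses.

```python
# A toy 2x2 zero-sum pitch-selection game. Payoffs are runs gained by the
# batter; all values are invented for illustration.
# Rows: pitcher throws fastball / slider.
# Columns: batter looks for fastball / looks for slider.
M = [[0.10, -0.02],   # fastball: costly when the batter is sitting on it
     [-0.01, 0.04]]   # slider: costly only when the batter is looking for it

# The pitcher throws the fastball with probability p chosen so the batter
# is indifferent between his two guesses:
#   p*M[0][0] + (1-p)*M[1][0] == p*M[0][1] + (1-p)*M[1][1]
p = (M[1][1] - M[1][0]) / (M[0][0] - M[1][0] - M[0][1] + M[1][1])

ev_look_fastball = p * M[0][0] + (1 - p) * M[1][0]
ev_look_slider = p * M[0][1] + (1 - p) * M[1][1]

print(round(p, 3))
# The two expected values come out identical at the equilibrium mix.
print(ev_look_fastball, ev_look_slider)
```

This is the heart of the puzzle: once both sides are mixing optimally, every pitch the pitcher actually uses should carry the same expected run value, so persistent gaps like Greinke's suggest somebody is playing off the equilibrium.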

In their paper, Kovash and Levitt tackle several questions, including the one posed above. But they also wonder whether pitch selection is completely unpredictable. A sprinkling of baseball traditionalism will tell you that it is not: pitchers rarely throw curveballs in 3-0 counts. And their findings support this observation:

If the pitcher threw a fastball on the last pitch, all else equal, it lowers the likelihood this pitch will be a fastball by 4.1 percentage points. [...] If the last pitch was a slider, the likelihood that this current pitch is a slider falls by two percentage points, or twenty percent.

So pitchers do not throw their pitches in random sequence, which means that (at least in theory) batters can exploit patterns.
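A quick simulation sketch shows why that matters. The sequencing probabilities below are invented and deliberately exaggerated (the real effect Kovash and Levitt report is only a few percentage points), but they illustrate the mechanism: a batter who conditions his guess on the previous pitch guesses correctly more often than one who ignores the sequence.

```python
import random
random.seed(0)

# Made-up, exaggerated sequencing rule echoing the paper's finding: after a
# fastball, the next pitch is less likely to be a fastball.
P_FB_AFTER_FB = 0.45
P_FB_AFTER_OFFSPEED = 0.65

def next_is_fastball(last_was_fb):
    p = P_FB_AFTER_FB if last_was_fb else P_FB_AFTER_OFFSPEED
    return random.random() < p

n = 100_000
last = True
naive_hits = smart_hits = 0
for _ in range(n):
    pitch = next_is_fastball(last)
    naive_hits += pitch            # naive batter always sits fastball
    smart_hits += (pitch != last)  # smart batter guesses the pitcher switches
    last = pitch

# The sequence-aware batter guesses right noticeably more often.
print(naive_hits / n, smart_hits / n)
```

With a truly random (memoryless) pitch mix, the two batters would do equally well; the gap exists only because the pitcher's sequence carries information.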

But what about the fact that run expectancies are not equal, regardless of sequencing?

If a pitching staff were able to reduce the share of fastballs thrown by 10 percentage points while maintaining the observed OPS gap on fastballs, this would reduce the number of runs allowed by roughly 15 per season, or two percent of a team’s total runs allowed. Because of behavioral responses by batters, this is likely to be an upper bound on the cost of teams throwing too many fastballs.

Now, we ought not to expect the OPS gap to persist even as teams threw fewer fastballs (throwing fewer ought to increase their effectiveness), but nevertheless the finding that pitchers throw too many fastballs is attention-grabbing.
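As a sanity check on the quoted figures, note what they imply about the baseline: if 15 runs is two percent of a team's runs allowed, the implied season total is about 750 runs, which is in the right range for an MLB team.

```python
# Back-of-the-envelope check of the quoted Kovash and Levitt figures.
runs_saved = 15          # from a 10-percentage-point cut in fastball share
share_of_total = 0.02    # stated as two percent of a team's runs allowed

implied_runs_allowed = runs_saved / share_of_total
runs_per_point = runs_saved / 10  # per percentage point of fastball share cut

print(implied_runs_allowed, runs_per_point)  # 750.0 1.5
```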

(If you'd like to get really angry about the fact that Kovash and Levitt used OPS in their analysis instead of the more reasonable choice of linear weights, Tom Tango has got you covered.)

But Phil Birnbaum has a very interesting bone to pick with Kovash and Levitt. As he puts it:

How can you tell, using game theory, whether fastballs are being overused? Simple: you just check the outcomes. [...] But it's not that simple: as soon as the opposition realizes that you're not throwing fastballs, they'll be able to predict your pitches more accurately [...]. Game theory can't tell you the right proportion, at least not without having to make assumptions that would probably be wrong. But it *can* tell you that you should adjust your strategy until the OPS-after-fastball is exactly equal to the OPS-after-non-fastball.

If that's what the Kovash/Levitt study did, it would be great. But it didn't. Instead, it did something that doesn't make sense, and makes almost all its conclusions invalid.

What did it do? It considered outcomes only for pitches that ended the "at bat". (The authors say "at bat", but I think they mean "plate appearance". I'll also use "at bat" to mean "plate appearance" for consistency with the paper.)

Kovash and Levitt aren't quite as unaware of the problems Phil underscores as he makes them out to be. From their paper:

If there are no spillovers across pitches, there should be no difference in outcomes across pitch types if the pitch does not end the at bat. To the extent, however, that fastballs are slightly more likely to generate strikes than non-fastballs, throwing a fastball may provide some benefit to the pitcher when the at-bat does not end with the current pitch.

But Birnbaum's point remains, and I do not have a good explanation for why Kovash and Levitt use OPS nor for why they ignore outcomes that do not end the plate appearance.

Certainly, if we are to give any reason why Leo Mazzone was such a good pitching coach (other than the astronomically good luck of having Maddux, Glavine and Smoltz under his tutelage), it was that he got his pitchers to "pitch off" their fastballs.

Nevertheless, even if we reran the regression with the proper changes, as Phil suggests, I'm confident we'd find differentials in the run expectancies of various pitch types. And that just doesn't seem to jibe with what game theory tells us ought to happen.

One possible explanation is that pitchers and pitching coaches are not aware of the differentials. Much of game theory, and certainly minimax strategies, relies on each party knowing the payoffs for the various choices of both parties. And that just doesn't seem to be the case in professional baseball. While teams study tape and attempt to exploit weaknesses, it does not appear that many (if any) teams have payoff matrices on index cards (like Earl Weaver used to keep his stats).

**Discussion Question of the Day**

Do you think that this line of inquiry could bear fruit for major league teams? Ought they to do game-theoretic analysis and provide it to their pitching coaches? Or are the assumptions necessary to construct a game like this too attenuated to produce any real-world benefits?