When the Dodgers stormed their way to a 104-win season last year with a payroll almost twenty percent higher than the next-highest in baseball (the Yankees'), I decided to root against them in the postseason. There's no drama when the highest-paid team wins. But I wondered: how often does that actually happen? How closely is payroll related to performance? I decided to take a look at the 2017 data and see what I could find.
Of the 14 teams with payrolls above the MLB average of $152 million, nine had below-average performances this season: Toronto ($200M), Detroit ($198M), San Francisco ($191M), Texas ($185M), Baltimore ($182M), the Los Angeles Angels ($176M), Seattle ($171M), Kansas City ($158M), and the New York Mets ($154M).
The obvious caveat here is that in some cases, teams likely have to spend a lot of money to try to remain competitive in a difficult division or league. That theory certainly applies to teams like the Blue Jays, who have to contend with the Red Sox and Yankees on a regular basis. But it less clearly applies to the Mets, who play in a thoroughly mediocre division aside from the Nationals, or to the Angels, Rangers, and Mariners, all of whom managed to perform below league average while sharing a division with one another. And while the Giants had to play the Dodgers, Diamondbacks, and Rockies regularly, strength of schedule alone doesn't account for their MLB-worst 64-win season (tied with Detroit) despite the sixth-highest payroll in baseball. Detroit's payroll was even higher than San Francisco's, and its division competition weaker. For the "let's go buy championships" bunch, this is a problem.
I love graphs. The one below plots team payroll against team wins, with the blue dashed line representing the correlation trend. One way to read it: if a team is above the blue line, it outperformed the expectation set by the league-wide payroll/win trend; if it's below the line, it underperformed. The distance between a team and the blue line represents the magnitude of its deviation from expectations. Cleveland, for example, vastly overperformed (102 wins despite just the 13th-lowest payroll), and Detroit similarly underperformed (64 wins with the 5th-highest payroll).
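If you want to reproduce this kind of chart yourself, the "distance from the blue line" is just the residual of a least-squares fit. Here's a minimal sketch using numpy; the payroll and win figures below are made-up placeholders for illustration, not the actual 2017 data.

```python
import numpy as np

# Illustrative numbers only -- NOT the actual 2017 payroll/win figures.
# Payroll in millions of dollars; wins over a 162-game season.
payroll = np.array([240, 198, 191, 152, 124, 100, 80], dtype=float)
wins = np.array([104, 64, 64, 80, 102, 75, 71], dtype=float)

# Fit the trend line (the blue dashed line): wins ~ slope * payroll + intercept.
slope, intercept = np.polyfit(payroll, wins, 1)
expected = slope * payroll + intercept

# A team's residual is its vertical distance from the line:
# positive -> outperformed its payroll-based expectation; negative -> underperformed.
residuals = wins - expected
for p, w, r in zip(payroll, wins, residuals):
    print(f"${p:.0f}M payroll, {w:.0f} wins: {r:+.1f} vs. expectation")
```

One nice property of a least-squares fit with an intercept: the residuals sum to zero, so over- and underperformance across the league balance out by construction.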
Every correlation has what's called a coefficient of determination, or R². The R² value measures the proportion of the variance in the dependent variable that can be explained by the independent variable. In layman's terms for this model: how much of the difference in wins between teams can be explained by their payrolls? The R² here is 0.123, meaning that 12.3% of the variation in win totals can be explained by payroll differences. That's far smaller than I expected when I set out to evaluate this.
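For a simple linear regression like this one, R² is just the square of the Pearson correlation coefficient, and it's equivalent to one minus the ratio of residual to total variance. A quick sketch showing both routes to the same number, again with made-up placeholder figures rather than the real 2017 data:

```python
import numpy as np

# Illustrative numbers only -- NOT the real 2017 figures.
payroll = np.array([240, 198, 191, 152, 124, 100, 80], dtype=float)
wins = np.array([104, 64, 64, 80, 102, 75, 71], dtype=float)

# Route 1: R^2 is the squared Pearson correlation between the two variables.
r = np.corrcoef(payroll, wins)[0, 1]
r_squared = r ** 2

# Route 2: 1 - (residual sum of squares / total sum of squares).
slope, intercept = np.polyfit(payroll, wins, 1)
predicted = slope * payroll + intercept
ss_res = np.sum((wins - predicted) ** 2)   # unexplained variation
ss_tot = np.sum((wins - wins.mean()) ** 2) # total variation
r_squared_alt = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.3f}")
```

The second form makes the interpretation concrete: R² is literally the share of win-total variation that the payroll trend line accounts for.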
When I pondered why this might be, I thought about how front offices often pursue the best player available without knowing exactly how that player will fit an existing clubhouse mentality or contribute to team chemistry. One way to test this theory, I thought, would be to correlate team payroll with Wins Above Replacement (WAR), controlling (or at least trying to) for team performance while evaluating the returns that front offices received from their players individually. The result is below:
This surprised me. With an R² of 0.104, even less (10.4%) of the variation in WAR can be explained by payroll. (A brief aside: one eye-catching thing about this chart is how WAR tracks with wins. Compare the two charts side by side and you'll see what I mean — the Padres, for example, look alright on the wins chart but dwell in the cellar of the WAR chart, relative to expectations.)
This topic deserves much, much deeper exploration, but hopefully this illuminates some of the data. I'd be eager to figure out a way to control for injuries in this model, to determine how strong the correlation would be absent unexpected developments like Bryce Harper slipping on a wet first base. If you have suggestions, let me know.