An OPS Question

Over the long run, a team's OPS differential does a pretty good job of explaining its winning percentage. I looked at all teams from 1989-2002 and calculated the OPS each team hit over the whole period and the OPS it allowed its opponents (for OBP I used only hits, walks and at-bats).
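Here is a minimal sketch of the OPS calculation as I read it from the description above. The on-base piece uses only hits, walks and at-bats; the slugging piece and the input names (h, bb, ab, tb) are my assumptions, not something stated in the post.

```python
def obp(h, bb, ab):
    """On-base percentage using only hits, walks and at-bats (assumed form)."""
    return (h + bb) / (ab + bb)

def slg(tb, ab):
    """Slugging percentage: total bases per at-bat (standard definition, assumed here)."""
    return tb / ab

def ops(h, bb, ab, tb):
    """OPS = OBP + SLG."""
    return obp(h, bb, ab) + slg(tb, ab)

# OPS differential for one team over the whole period would then be
# ops(hitting totals) - ops(totals allowed to opponents).
```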

I ran a regression with winning percentage as the dependent variable and OPS differential as the independent variable. The regression equation was

PCT = 1.21*OPSDIFF + .500

The r-squared was .935, meaning that 93.5% of the variation in team winning percentage is explained by the model. The standard error was .0095, or about 1.54 wins over a full 162-game season.
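The fit itself is an ordinary least-squares regression, which could be reproduced along these lines. The arrays below are made-up placeholders standing in for the actual 1989-2002 team data, and the variable names are illustrative, not from the post.

```python
import numpy as np

# Placeholder inputs: one entry per team (not the real data).
ops_diff = np.array([-0.030, -0.010, 0.000, 0.015, 0.040])  # team OPS minus opponent OPS
pct      = np.array([0.460, 0.488, 0.502, 0.515, 0.548])    # team winning percentage

# Ordinary least squares fit: PCT = slope * OPSDIFF + intercept
slope, intercept = np.polyfit(ops_diff, pct, 1)

# Residual standard error, converted to wins over a 162-game season
pred  = slope * ops_diff + intercept
resid = pct - pred
se    = np.sqrt(np.sum(resid**2) / (len(pct) - 2))
print(f"PCT = {slope:.2f}*OPSDIFF + {intercept:.3f}, SE = {se*162:.2f} wins")
```

With the real data this is where the .935 r-squared and the .0095 standard error (.0095 * 162 = about 1.54 wins) come from.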

The table below shows how many more or fewer wins each team had than the model predicts, per 162-game season. A positive number means a team won more than predicted; a negative number means it won fewer (a sketch of this calculation follows the table).

Red Sox        -3.81
Mariners       -2.28
Devil Rays     -2.21
Expos          -1.91
Phillies       -1.54
Dodgers        -0.91
Tigers         -0.86
Orioles        -0.52
White Sox      -0.47
Cubs           -0.36
Royals         -0.32
Mets           -0.28
Padres         -0.22
Indians        -0.20
Reds           -0.19
Diamondbacks   -0.14
Rangers        -0.05
Blue Jays       0.05
Marlins         0.30
Brewers         0.32
Rockies         0.53
Astros          0.88
Braves          0.91
Yankees         1.12
Angels          1.14
Cardinals       1.26
Pirates         1.29
Giants          2.06
Twins           2.72
A's             3.69
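For reference, here is one way the entries above could be computed, assuming each team's actual winning percentage and OPS differential are known. The function name and the example numbers are hypothetical.

```python
def wins_vs_model(actual_pct, ops_diff, slope=1.21, intercept=0.500, games=162):
    """Actual wins minus model-predicted wins over a 162-game season."""
    predicted_pct = slope * ops_diff + intercept
    return (actual_pct - predicted_pct) * games

# Example: a team with a +.020 OPS differential that played .540 ball
print(round(wins_vs_model(0.540, 0.020), 2))  # about 2.56 wins above the model
```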

Most teams are predicted quite well, but can anyone explain the A's and Red Sox? I would have thought that over a period of 14 seasons, with personnel and managerial changes (and changes in strategic philosophy), no team would consistently win much more or less than predicted. Then again, some teams have to sit at the ends of the scale, and it may simply have turned out to be them.

The graph below illustrates the relationship between OPS differential and winning percentage.