[Chart: AL vs. NL interleague results by year; the line is a two-year moving average.]
This year, the National League had its best showing in interleague play since 2004, when the leagues almost exactly split the season series. The American League's overall record this year was 134-118 (.532 W%), which almost exactly matches its PythagenPat record of .529 (1168 RS vs. 1098 RA). This is the second consecutive year that the NL has improved its record vs. the AL, something that hadn't happened since 2002-2003. As a National League fan, and as someone who tries to assess league quality differences in the power rankings, this is terrific news. The ideal would be for the two leagues to be equivalent; the NL may still be behind, but it seems to be making a comeback.
What's happening? We've talked before about some of the explanations for the American League's dominance in interleague play over the past half-decade. Once you dispense with the idea that rule differences drive the entirety of the effect (please see this article first before you start ranting about DH advantages for AL teams), most explanations have converged on American League teams simply having better management. I think what we may be seeing is the National League teams starting to play catch-up: making better use of fiscal resources via the draft and amateur free agent signings, making smarter moves based on better scouting and statistical information, etc. The Pirates and Reds are great examples of this shift, even if it hasn't paid off yet for their major league clubs.
...the rest of this is more geeky, for those interested in quantifying/estimating league disparities. Feel free to skip!
How does this news affect the power rankings? Over the past two years I've applied a 40-run-per-season boost to the run differential of AL teams (20 runs added to runs scored, 20 runs subtracted from runs allowed), and a 40-run penalty to NL teams, when determining our Team Performance Index (TPI). This is based on several convergent sources of information, including interleague records as well as studies of players who have switched leagues. Do we need to reduce this league adjustment to account for this year's apparent NL resurgence?
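For the curious, here's a minimal sketch of how that adjustment works. The function name and signature are my own illustration, not actual power-rankings code; it just implements the split described above (half the adjustment to runs scored, half to runs allowed):

```python
def adjusted_run_diff(rs, ra, league, adjustment=40):
    """Per-season league adjustment for TPI: half the adjustment goes
    to runs scored, half to runs allowed. AL teams get a boost,
    NL teams the corresponding penalty."""
    half = adjustment / 2
    if league == "AL":
        return (rs + half) - (ra - half)
    elif league == "NL":
        return (rs - half) - (ra + half)
    raise ValueError("league must be 'AL' or 'NL'")

# A team with a neutral raw record (700 RS, 700 RA):
al_diff = adjusted_run_diff(700, 700, "AL")  # +40: credited as above average
nl_diff = adjusted_run_diff(700, 700, "NL")  # -40: debited as below average
```

The point of the symmetric split is that the adjustment changes a team's run differential by the full 40 runs while only moving each of RS and RA by 20.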
Maybe. Let's just assume for a minute--and it's a big assumption--that the .532 W% posted this year represents the true talent gap between the two leagues. Using the odds ratio, we can estimate that a .532 W% would result when a true-talent .516 W% team (the AL) played a true-talent .484 W% team (the NL). Assuming a fairly typical MLB run environment, and using PythagenPat, this would mean that an AL team might be expected to score 760 runs and allow 734 runs when playing .500 teams, while the NL team would do the reverse. Based on that 26-run expected differential, and accounting for the fact that teams play about 10% of their games in interleague play (those games do not need to be adjusted in this manner), I estimate that we should apply a 23-run-per-season boost/penalty to team run differentials instead of the 40-run adjustment I've been using in the power rankings. Pretty big difference!
But that might not be fair. This year might be an aberration, and we may see the AL return to its typical dominance next season. Maybe there were some match-up issues going on this year that tweaked the records to favor the NL more so than is typical? A more conservative approach would be to take a 2-, 3-, or even 5-year average of interleague records and use that as the basis for our adjustment. Here's a table showing how doing so would affect our league adjustment:
| Time Span | AL W% | Estimated AL True Talent | Estimated NL True Talent | Per-Season Adjustment to Run Differential |
| --- | --- | --- | --- | --- |
This means that the currently used league adjustment of 40 runs is (exactly!) appropriate if we use a 3-year moving average of interleague records to estimate league talent levels. But it's not appropriate if we only look at this year, or at the past two years.
I'm honestly not sure what to do here. My inclination is to err toward the smallest league adjustment above, if only because interleague record is an imperfect measure of league quality differences. Doing so builds in some "regression" (in the loosest possible sense), in that we're leaning toward evenly matched leagues in our decision-making process. So, I'm leaning toward pulling the adjustment down to 23 runs instead of 40. That said, I've argued in the past for using multiple years as a more accurate, predictive measure of league differences, and so I'd be reversing myself here because it gives me a more palatable result. That's not exactly a scientific approach.
...what do you folks think?