Welcome to the 2015 trade deadline! To me, this is one of the weirdest times of year, one of the only times the most leisurely of sports feels genuinely rushed. It's also a lot of fun, particularly for a fan of a team no longer competing for a playoff spot because it offers a chance for your team to do something that will actually matter for 2016 and beyond.
Over the last few days, a good number of pitching prospects have been flying across the country, and something often heard in discussions of those deals is that a player's floor might be as a reliever. There's a sense that for pitchers who don't cut it as starters, relieving is generally the fallback option. For this reason, few legitimate pitching prospects are relievers, even if they project to end up in the pen at the major league level. This is similar to how position player prospects are pushed as far through the minors as possible at the highest possible position on the defensive spectrum, even if they're unlikely to stick there in the majors (shortstop is a good example, as many shortstop prospects end up at second, third, or in the outfield).
While the starter-to-reliever transition is fairly common, there's a limited understanding of what impact it tends to have on a player. Results are thought to improve -- stuff and velocity both seem to play up in shorter stints -- but by how much is another question. Obviously some players transition better than others, but I wanted to try to develop an estimate of the differential between a player's results as a starter and as a reliever.
Some attempts have been made previously, like this one by Bryan Grosnick, which examines a sample of players who underwent a midseason transition from starter to reliever. The problem with that sort of method is that those players are not chosen at random. Usually, they're not performing particularly well, since no team is going to mess with a competent starter. These players might also have a repertoire that makes the transition easier; an example could be a lefty with a below-average changeup and large platoon splits. That means looking at the difference in stats for players who both started and relieved probably underestimates the change, since top starters don't move to the relief corps.
There is one exception to this norm: the playoffs. Each October, teams shunt their fourth and fifth starters to the bullpen, using the extra days of rest between games to concentrate innings among their best arms.
While fourth and fifth starters aren't necessarily "good", they're at least competent, and looking at how their stats translate from starting to relieving will provide a more comprehensive picture than looking only at players who switch during the regular season.
This approach isn't without complications of its own. The degree of difficulty is substantially higher, since these pitchers aren't facing a random selection of hitters but rather are facing the best hitters on playoff teams. What I set out to do, therefore, was calculate the average difference between the regular season stats and the playoff stats of pitchers who were starters in both in order to set a baseline, then calculate the difference between the regular season stats and playoff stats of pitchers who switch from starter to reliever. Combining the two should give an estimate of the starter-to-reliever transition for decent starters.
Admittedly, this still isn't ideal -- rest patterns in the playoffs are different, and some excellent pitchers (e.g., Madison Bumgarner in 2014) might be used more than is ideal for their own stats, since a number one pitcher at 80 percent is often better than a number four at 100 percent. That said, those pitchers are the minority, and this should still give a reasonable estimate of the differential from the regular season to the playoffs.
I looked at three stats for the 2013 and 2014 regular seasons and postseasons -- runs allowed per batter faced (RA%), walks per batter faced (BB%), and strikeouts per batter faced (K%). I thought about including home runs per batter faced as well, but in the small samples pitchers accumulate in the postseason, that rate gets extremely noisy, and I don't think it's worth the error bars it would require.
First are the stats for the pitchers who started in both the regular season and the playoffs, 66 pitcher-seasons in total. I didn't want to count any of the relief stints from starters in the playoffs, so for both time periods, this is only looking at their stats as starters.
Unsurprisingly, these pitchers walked more batters and allowed more runs, but somewhat surprisingly struck out opponents at the same rate. That's viewing them as a group, however, and it doesn't communicate the average change for each individual. I took each pitcher's differential in each category and weighted it by the harmonic mean of his batters faced in the regular season and the playoffs before taking the average.
| | RA% | BB% | K% |
| --- | --- | --- | --- |
| Reg. Season to Playoff Differential | +20.5% | +3.9% | -1.2% |
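To make that weighting concrete, here's a minimal sketch in Python. The pitcher-seasons below are invented for illustration (the real sample was 66 starter pitcher-seasons from 2013-14); each tuple is a regular-season rate and batters faced, then the playoff rate and batters faced.

```python
from statistics import harmonic_mean

# Hypothetical pitcher-seasons: (reg. rate, reg. BF, playoff rate, playoff BF).
pitchers = [
    (0.110, 800, 0.135, 60),
    (0.105, 750, 0.120, 45),
    (0.120, 600, 0.140, 30),
]

# Each pitcher's percentage change, weighted by the harmonic mean of his
# batters faced in the two samples, then averaged across pitchers.
weights = [harmonic_mean([reg_bf, post_bf]) for _, reg_bf, _, post_bf in pitchers]
diffs = [(post - reg) / reg for reg, _, post, _ in pitchers]
weighted_diff = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
print(f"{weighted_diff:+.1%}")
```

The harmonic mean keeps a pitcher with a huge regular-season workload but only a handful of playoff batters from dominating the average, since it's pulled toward the smaller of the two samples.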
As expected based on the above table and what we know of the playoffs, pitchers declined in all three categories -- most in runs allowed, followed by walks and then strikeouts. In 2014, the league-average RA% among starters was 11.0%, BB% was 7.1%, and K% was 19.4%; if those stats changed by the amounts in the above table, they'd translate to 13.3%, 7.4%, and 19.2%.
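That translation is simple arithmetic -- each rate shifts by its measured differential. A quick sketch, using the league-average figures and differentials from the text above:

```python
# 2014 league-average rates for starters, and the measured
# regular-season-to-playoff differentials, expressed as fractions.
league_2014 = {"RA%": 0.110, "BB%": 0.071, "K%": 0.194}
playoff_diff = {"RA%": 0.205, "BB%": 0.039, "K%": -0.012}

# Each rate shifts proportionally: rate * (1 + differential).
playoff_rates = {stat: rate * (1 + playoff_diff[stat])
                 for stat, rate in league_2014.items()}
for stat, rate in playoff_rates.items():
    print(f"{stat}: {rate:.1%}")  # roughly 13.3%, 7.4%, 19.2%
```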
What about the starters who joined the bullpen in the playoffs? Again, I'm only looking at players who started at least one game in the regular season and exclusively relieved in the postseason, of which there were 28 in 2013 and 2014.
| | RA% | BB% | K% |
| --- | --- | --- | --- |
| Regular Season (starting) | 10.7% | 7.9% | 20.5% |
These pitchers were, as a group, worse than those who continued starting in the playoffs, which is to be expected.
Their group results didn't look exactly as expected -- after moving to relief, these pitchers gave up fewer runs, but walked more batters and struck out fewer. What about as individuals, using the same harmonic mean weighting?
| | RA% | BB% | K% |
| --- | --- | --- | --- |
| Reg. Season to Playoff Differential | -15.4% | +33.4% | +3.4% |
On average, the individual pitcher saw his runs allowed fall by 15% and his strikeouts increase by 3%, but his walks increase by more than a third. Now, there are obviously big error bars on either side of these figures, since they're informed by relatively small samples, but by subtracting the regular season to postseason differential from the regular season starter to postseason reliever differential, the result should be an estimate of starter-to-reliever differential only.
| | RA% | BB% | K% |
| --- | --- | --- | --- |
| Starter to Reliever Differential | -35.9% | +29.5% | +4.6% |
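The subtraction behind that table is component-wise: the playoff baseline (from the pitchers who started in both samples) comes off the differential for the starters who relieved. A minimal sketch, with the differentials from the earlier tables expressed as fractions:

```python
# Differential for pitchers who started in both samples (the playoff baseline)
# and for starters who relieved in the playoffs.
starter_baseline = {"RA%": 0.205, "BB%": 0.039, "K%": -0.012}
reliever_shift = {"RA%": -0.154, "BB%": 0.334, "K%": 0.034}

# Subtracting the playoff baseline isolates the starter-to-reliever effect.
conversion = {stat: reliever_shift[stat] - starter_baseline[stat]
              for stat in starter_baseline}
for stat, diff in conversion.items():
    print(f"{stat}: {diff:+.1%}")  # -35.9%, +29.5%, +4.6%
```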
If we use the same league average starting stats for 2014 and run them through this conversion, the result is a 7.1% RA%, a 9.2% BB%, and a 20.3% K%. Again, I don't think those numbers should be taken at face value, particularly the BB%, but for context, the league average figures in relief in 2014 were 10.2%, 8.6%, and 22.2%.
While this is a fun academic exercise, these samples are too small to draw any real conclusions -- taken at face value, this data suggests that converted starters should strike out fewer batters and walk more than permanent relievers, yet give up fewer runs, which doesn't make any sense. If anything, what this might suggest is that the starter-to-reliever conversion is not as simple or easy as it seems, and less of a guarantee than it might appear. These pitchers didn't get much of a chance to adjust to working in relief, for the most part, and that could be why the results are more muddled than I initially expected. Or maybe there's just nothing here. Baseball!
. . .
Henry Druschel is a muddled and confused Contributor at Beyond the Box Score. You can follow him on Twitter at @henrydruschel.