Aaron Judge and Jose Altuve were so intensely locked in battle for the AL MVP during the final stretch of the 2017 season that almost nobody stopped to think about just how improbable that pairing was. Not in the purely physical sense of Judge being the skyscraper to Altuve’s midwestern ranch, but in the sense that neither of them was supposed to be in that position.
The players who win MVP awards are often the ones we see coming. They’re the Bryce Harpers, Mike Trouts, and Kris Bryants of the world, their talent undeniable at first glance, even if some doubts manifest themselves in the minds of scouts and talent evaluators. Neither Judge nor Altuve fit that mold during their time as prospects.
Less than a year ago, Baseball Prospectus ranked Judge as the 63rd-best prospect in baseball and only the 7th-best in the New York system. In 2010, the same source had Jose Altuve outside its top 101 prospects in baseball and just 11th in a pre-tanking Astros system. Neither was expected to be more than an average player; both had 2017s that were more than deserving of end-of-season hardware.
Players like Judge and Altuve perplex us. They remind us that, despite how far we’ve come in talent evaluation, there is still no perfect way to predict a player’s future. We aren’t just missing by a little bit, either; for players like Judge and Altuve, our predictions of their futures were off by miles. This can be humbling—to the prospect writer who all but gave up on Austin Barnes’ bat only to see him have a strong offensive season, for example, or to the thousands of White Sox fans who were certain Avisail Garcia would never be an All-Star. Being wrong is a tough pill to swallow, but it comes with the territory of evaluating hundreds or thousands of players over the course of just a few seasons.
Analytics can tell us a lot about what a player has done, is doing, and will do, but no metric is without its flaws. Using statistics to predict the future of talent yet to reach the big leagues is even more difficult, and leaves even more room for error. Subjective viewings of players by very talented scouts are a good tool, but it often takes many viewings before those observations can be trusted. None of this suggests that analysts should quit working to solve these problems, though.
If there were a computer program or brilliant scout that could spit out answers about who the best players will be before they become those players, the game would no longer be fun. The intrigue would be gone, and there would be no reason to watch the games. These players are the Riemann hypothesis; they are dark energy. They’re the problems that attract our attention the most, not only despite the fact that they are seemingly unsolvable but because of it. They excite us and challenge us to use every tool in our toolbox to find the answer.
The existence of Judge and Altuve as the unexpected top contenders for the MVP award isn’t an indictment of the scouting community or a glitch in the system. Rather, it’s a reminder that any player at any time can take himself to the next level. Talent and performance are not continuous; there are jumps and discontinuities we cannot predict. And that’s a good thing.
Judge and Altuve may seem like an unsolvable problem, but that shouldn’t deter us from finding a lesson in their existence. It’s not a lesson in player development but rather in the beauty of the game. They remind us that maybe, just maybe, that player we’ve had our eye on in Double-A is going to be something one day. We might be wrong. Those with better evaluation tools might tell us we’re wrong. But the hope remains, and it gives us something to look forward to. It’s that quiet belief that a player can at any moment become one of the best in the league that fuels our passion and fills us with hope.