The All-Star vote is a popularity contest. The winning rosters bear more resemblance to the Yankees and Phillies' starting lineups than to a collection of the league's best players. It's the best proof we have that democracy doesn't work. Yada yada yada. We've heard it all before.
You can say the voting process is unfair or screwed up or a bad way to pick the teams who will fight for home field advantage in the World Series. But for all its flaws, the All-Star vote is an incredibly useful tool for one reason: it's the best way we have to take the temperature of the at-large MLB fanbase.
This year, baseball fans submitted 32.5 million ballots—more than 1 for every 10 U.S. citizens—and I estimated that they cast 357.5 million votes. If you personally interviewed every single fan who saw a game at your local ballpark this year, you wouldn't even come close to hearing that many opinions.
Over the next few weeks, I'll be looking at what All-Star vote totals can tell us about fans' biases, perceptions, and favorite things to look for in players. But before we get into that, we need a baseline against which to compare the fans' choices: an objective framework for how the voting results would look without whatever other factors pervade the fans' collective psyche.
There is no single way to definitively determine All-Star worthiness, but even if there were such a method that you found to be completely satisfactory, not everyone else would agree. No matter how simple it seems or how stupid the opposing arguments are or how much evidence and logic you possess that your assertion is an objective truth, there will always be someone, somewhere who disagrees with you. Crazy as it sounds, there are people who think the world is flat or that George W. Bush is a 12-foot alien lizard or that batting average is more important than OBP. No universal truth is truly universal (except, of course, for this sentence).
Such is the case with All-Star voting. Even if you discount votes from Rockies fans, surely some people would have somehow decided that Jose Lopez deserved a place in the Midsummer Classic, and it's virtually impossible that White Sox fans would have been the only ones to cast their votes for Adam Dunn. So in order to build a framework for All-Star voting, we shouldn't ask which players deserved our votes, but how appealing the candidates in each category are relative to each other. There is no doubt that some people somewhere truly think that Tsuyoshi Nishioka is better than Dustin Pedroia, so we need to come up with an estimate of how many there are.
There is a difference between this expected contrarianism and collective overratedness or bias. For example, Derek Jeter's undeserved first-place finish in the AL shortstop voting this year represented more than just your average amount of collective misguidedness: Jeter's personal popularity and pre-established reputation, along with the media exposure and large fanbase that come with the territory in the Bronx, had much more to do with his selection than his actual on-field performance did. We want to keep these kinds of external factors out of our baseline model, but we don't want to exclude what we might think of as wrong opinions that are shaped by differing perspectives rather than outright bias.
In order to make a framework that included normal dissonance but not player- or team-specific biases, I invented a new tchotchke statistic called Star Power:
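Roughly, with fWAR taken at the All-Star Break as described below,

$$\text{Star Power} = \left(\text{fWAR} + 1.5\right)^{3}$$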
It's imperfect, but fWAR is (at least, to me) the best single stat we have for measuring all-around performance across groups of varying types of players. The added 1.5 cancels out below-replacement-level players (I took each player's numbers at the All-Star Break, and no player on the ballot was below -1.5 fWAR), while cubing the result adds distance between different levels of players. The choice of the cube was somewhat arbitrary, but I found through trial and error that squaring didn't add enough differentiation, while raising to the fourth power seemed to overdo it.
Then, comparing each candidate's Star Power to that of his peers at his position, I determined his "expected Vote Share." So, for example, the xVoteShare for a non-outfield candidate in the AL (Player n) would be:
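With the sum running over all k AL candidates at that position,

$$\text{xVoteShare}_{n} = \frac{\text{Star Power}_{n}}{\sum_{i=1}^{k} \text{Star Power}_{i}}$$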
Or, in less-mathematical terms: the proportion of votes a player should expect to receive without the influence of other biases is equal to his Star Power divided by the sum of all the candidates at his league and position's Star Powers. So if there's a category in which the only candidates are Tony Stark (10 Star Power) and Bruce Wayne (5 SP) and there are no fan biases, Stark would get two-thirds of the vote.
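For concreteness, here's a minimal sketch of that arithmetic in Python. The player names and fWAR values below are made up purely for illustration; only the two formulas above come from the model itself.

```python
# Minimal sketch of the Star Power / xVoteShare arithmetic.
# The names and fWAR values here are invented for illustration only.

def star_power(fwar_at_break: float) -> float:
    # Star Power = (fWAR at the All-Star Break + 1.5) cubed
    return (fwar_at_break + 1.5) ** 3

def expected_vote_shares(category: dict[str, float]) -> dict[str, float]:
    # xVoteShare = a candidate's Star Power divided by the total Star Power
    # of every candidate at his league and position.
    powers = {name: star_power(fwar) for name, fwar in category.items()}
    total = sum(powers.values())
    return {name: p / total for name, p in powers.items()}

# Hypothetical three-man ballot at one position:
ballot = {"Player A": 4.0, "Player B": 2.5, "Player C": 0.5}
for name, share in expected_vote_shares(ballot).items():
    print(f"{name}: {share:.1%}")
```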
How well does it work? See for yourself:
It's far from perfect, but overall I'd say it passes the smell test. Looking at AL first basemen, for example, most serious analysts wouldn't have to think too hard before picking Adrian Gonzalez as the best at his position in the league, but you could still make a case for Miguel Cabrera, Mark Teixeira, or even Paul Konerko. Maybe you'd go with Adam Lind because of his raw talent or Justin Smoak because of his age and cavernous home park. I can't imagine ever arguing that Kendrys Morales (who's missed the whole season) is more deserving of the Midsummer Classic than Gonzalez or Cabrera or someone who's actually played an MLB game this year, but if you get 100 people in a room, one could probably tell you with a straight face that Morales is the top dog because he doesn't have any errors or strikeouts.
On the other hand, the numbers for AL shortstops and NL third basemen tell a very different tale: no one clears the 20% threshold, and a surprising number of players have at least decent claims to being the best at their position in the league. But then again, you didn't need these numbers to tell you that. Odd as it sounds that Casey Blake gets a non-trivial amount of expected support, I like that this framework knows the difference between a category in which there is a clear frontrunner who should run away with the voting and one in which there's no clear top dog.
It's also worth noting that (using my estimated vote counts for the players outside the Top 8 at their positions) Star Power correlated with vote totals better than any other statistic I could think of—including all the traditional back-of-the-baseball-card stats and unadulterated WAR.
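The comparison behind that claim is just a correlation between each candidate's Star Power and his vote total. Here's a minimal sketch, with placeholder arrays standing in for the real ballot data (the numbers below are invented, not actual 2011 figures):

```python
import numpy as np

# Placeholder data: one entry per candidate, Star Power alongside the
# candidate's (estimated) vote total. Swap in any rival stat to compare.
star_power_values = np.array([166.4, 64.0, 8.0, 91.1, 30.0])
estimated_votes = np.array([4_200_000, 1_900_000, 350_000, 2_600_000, 800_000])

# Pearson correlation between the stat and the vote totals.
r = np.corrcoef(star_power_values, estimated_votes)[0, 1]
print(f"correlation: {r:.3f}")
```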
Of course, this all comes with a large number of caveats. WAR isn't the end-all be-all, and you can't blindly base your best-players-in-the-game judgments on it alone. Moreover, not everyone waits until July to vote: fans who judge players by their seasons to date will vote for the fast starter in April but won't single out the guy who suddenly gets hot in June. And there are some who simply choose their favorite players without the pretense of trying to select the best ones.
It sounds insane to suggest that Vernon Wells and Lyle Overbay should have received any votes at all, but that's how it always works out (when we ask 32.5 million people for their opinions, we're going to hear some things that surprise us), and empirically this framework seems to fit pretty well. This model isn't perfect, but I like it because it knows that we fans aren't, either.