The way Earned Run Average is calculated is unfair to pitchers. This is a proposal, although not a perfect one, that strives to make the metric more accurate.
ERA, as defined by Baseball Prospectus, is as follows:
Earned runs, divided by innings pitched, multiplied by nine.
The resulting number gives an indication of how many runs, on average, a pitcher will allow over the course of 9 innings pitched – a full game. It gets dicey when relief pitchers are thrown into the mix. If a starting pitcher departs the game with runners left on base and a reliever allows those runners to score, all of those runs are charged to the starting pitcher. It doesn’t seem entirely fair, does it?
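The formula above is simple enough to sketch in a few lines of Python. One caveat, and an assumption on my part: innings pitched must be expressed as a true decimal (e.g. 200⅓ innings as 200.333...), not the box-score shorthand "200.1".

```python
def era(earned_runs, innings_pitched):
    """Earned runs allowed per nine innings pitched.

    `innings_pitched` must be a true decimal (200 + 1/3 for "200.1 IP"),
    not the box-score notation, or the result will be slightly off.
    """
    return earned_runs / innings_pitched * 9

# A pitcher who allows 70 earned runs over 200 innings:
print(era(70, 200))  # → 3.15
```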
There have been efforts to better measure ERA. The first is ERA+. From The Hardball Times:
ERA+ is ERA measured against the league average, and adjusted for ballpark factors. An ERA+ over 100 is better than average, less than 100 is below average. The specific formula divides the league ERA by the pitcher’s ERA (and adjusts for ballpark). So an ERA+ of 125, for instance, means that the league ERA was 25% higher than the pitcher’s ERA (which means that the pitcher’s ERA was 80% of the league ERA). Careful with those ratios.
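Setting the ballpark adjustment aside (it varies by park and is beyond a quick sketch), the core ratio works out like this:

```python
def era_plus(league_era, pitcher_era):
    """ERA+ before any park adjustment: 100 * (league ERA / pitcher's ERA).

    Over 100 is better than league average; under 100 is worse.
    """
    return 100 * league_era / pitcher_era

# A pitcher with a 3.60 ERA in a league with a 4.50 ERA:
print(era_plus(4.50, 3.60))  # → 125.0
```

Note that 3.60 is exactly 80% of 4.50, matching the "careful with those ratios" caveat in the quote: an ERA+ of 125 does not mean the pitcher was 25% better than average.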
Although an improvement on ERA, since ERA+ is able to judge a pitcher’s effectiveness in relation to his peers, it still fails to address the inherent inequality in how ERA is calculated. Another ERA-based stat is FIP (Fielding Independent Pitching). Again, from The Hardball Times:
a measure of all those things for which a pitcher is specifically responsible. The formula is (HR*13+(BB+HBP-IBB)*3-K*2)/IP, plus a league-specific factor (usually around 3.2) to round out the number to an equivalent ERA number. FIP helps you understand how well a pitcher pitched, regardless of how well his fielders fielded.
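The quoted formula translates directly to code. The stat line below is hypothetical, just to show the arithmetic, and the 3.2 constant is the "usually around" league factor from the quote, not an exact value:

```python
def fip(hr, bb, hbp, ibb, k, ip, constant=3.2):
    """Fielding Independent Pitching, per the formula quoted above.

    `constant` is the league-specific factor (roughly 3.2) that scales
    the result to an ERA-like number.
    """
    return (13 * hr + 3 * (bb + hbp - ibb) - 2 * k) / ip + constant

# Hypothetical season: 20 HR, 50 BB, 5 HBP, 3 IBB, 180 K over 200 IP
print(round(fip(20, 50, 5, 3, 180, 200), 2))  # → 3.48
```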
This is nice… but it doesn’t really deal with the runs a pitcher allows. It is an “equivalent” statistic, kind of like aspartame to sugar: both aim to do the same thing, but you can tell the difference. As well, FIP is not necessarily based in reality. By that I mean that FIP calculates how well a pitcher should have pitched, not how well he actually did pitch. For example, Brandon Morrow’s FIP is lower than Ricky Romero’s. That said, I know who I want starting game 1 of a playoff series (hint: it’s not Brandon Morrow). That is, after all, why we play the games. An aside: Morrow has a better WAR (3.2) than Ricky Romero (2.8). Lol wut.
My proposal is to revise how the ERA statistic is calculated by dividing runs into quarters based on where a relief pitcher’s inherited runners are stationed. For example: if a starter leaves the game with a runner on third base and the reliever allows that runner to score, the starting pitcher is responsible for 0.75 runs and the reliever is charged with 0.25. If the runner is on 2B, the responsibility is divided 50/50: 0.50 runs for the starter, 0.50 runs for the reliever. If the runner is on 1B, 0.75 runs are charged to the reliever and 0.25 to the starter.
If there are multiple men on base, the same formula applies. For example, if a starter leaves the game with a runner on 3B and a runner on 2B and the reliever allows both to score, the 2 runs are divided up as such: 1.25 to the starting pitcher (0.75 for the runner on 3B + 0.50 for the runner on 2B) and 0.75 to the relief pitcher (0.25 for the runner on 3B + 0.50 for the runner on 2B).
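The proposed split is mechanical enough to express as a small function. A sketch, with the shares exactly as laid out above (3B: 0.75 to the starter; 2B: 0.50; 1B: 0.25):

```python
# Fraction of each inherited run charged to the departing starter,
# keyed by the base the runner occupied when the starter left.
STARTER_SHARE = {3: 0.75, 2: 0.50, 1: 0.25}

def split_inherited_runs(bases_occupied):
    """Given the bases (1, 2, or 3) of inherited runners who later scored,
    return (runs charged to the starter, runs charged to the reliever).

    Each runner counts as one run, split per STARTER_SHARE.
    """
    starter = sum(STARTER_SHARE[base] for base in bases_occupied)
    reliever = len(bases_occupied) - starter
    return starter, reliever

# Starter departs with runners on 3B and 2B; the reliever allows both to score:
print(split_inherited_runs([3, 2]))  # → (1.25, 0.75)
```

The output matches the worked example above: 1.25 runs to the starter, 0.75 to the reliever.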
Earned Run Averages are artificially high for starting pitchers and (sometimes, depending on the role) artificially low for relievers. This would balance the scales. That said, it presents some issues: historical ERAs would have to be recalculated, and I’m not sure there is a way to do that without very detailed game data. I’m sure there are people much smarter than me who would be able to take on the challenge and succeed.
What do you think? Is this a good plan? Too simplistic? Or am I missing the mark completely? Please let me know in the comments section below.