Pitcher Use

Let’s start with a picture:

Although it isn’t obvious, this picture includes the current season, part of my current obsession with not relying on Retrosheet data alone.

So it sure looks like we’re in an age of unprecedented pitcher breakdown. And it looks like it’s on an accelerating growth path. But that’s deceiving: here’s the same data on a log scale.

For those less math-obsessed, plotting on a log scale makes constant percentage growth look like a straight line. And when we look at it this way, baseball history seems to have three eras:

  • From 1901 to WWII: the number of pitchers used seems to grow around 1.6 percent per year.
  • From 1941 to 1981: The Great Stasis, an average of about 15 pitchers used per year.
  • From 1981 to now: the number of pitchers used growing at about 2.2 percent per year.
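The log-scale point above can be sketched in a few lines of Python. The numbers here are made up for illustration, not the actual pitcher counts:

```python
import math

# A hypothetical series growing a constant 2.2 percent per year.
start, rate = 15.0, 0.022
counts = [start * (1 + rate) ** t for t in range(40)]

# On a log scale, each year adds the same constant amount, log(1 + rate),
# which is exactly what "a straight line" means on that plot.
log_steps = [math.log(counts[t + 1]) - math.log(counts[t])
             for t in range(len(counts) - 1)]
print(max(abs(s - math.log(1 + rate)) for s in log_steps) < 1e-9)  # True
```

So eyeballing straight-line segments on the log plot is the same thing as claiming constant percentage growth within each era.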

Now 2.2 percent per year is considerably faster than 1.6 percent per year, but the difference isn’t that noticeable over a period of only a few decades. And when you include the fact that American League roster sizes grew from 1900 to 1910, there may not be much to explain at all. But I’ll proceed as if there is.
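To put a rough number on “isn’t that noticeable,” here is a back-of-the-envelope compounding check, not a fit to the data:

```python
# How far apart do 2.2% and 1.6% annual growth end up
# after 40 years of compounding?
gap = (1.022 / 1.016) ** 40
print(round(gap, 2))  # ~1.27, i.e. only about 27 percent higher
```

Four decades of the faster rate leaves you only about a quarter higher than the slower rate would have, which is real but hardly dramatic.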

Let me propose an explanation of this pattern. There are two reasons that one team uses more pitchers than another: (a) ineffectiveness and (b) injury. Ineffectiveness, as a relative construct, can’t explain any sort of increase in the number of pitchers used since 1910, when roster size in both leagues was fixed at 25 (expanding in September). (From 1900 to 1920, some of the growth will simply have been growth in roster sizes, mostly in the American League.) That leaves injury.

I can tell a story. The start of the story is that pitchers are always getting better, but the process of getting better puts more and more stress on their arms, at least for some substantial fraction of pitchers. And that fraction keeps increasing simply because the stresses on arms keep increasing. It is true that training methods have dramatically improved over the period, but the best pitchers simply employ those methods as a baseline and press against those new, higher standards, so the injury rate keeps increasing, since human physiology is unchanged and more and more physically marginal pitchers are needed.

There is a second part to this story. The stresses on pitchers’ arms are so great at current levels of performance that you need more of them relative to hitters on a team. But this particular driver is gone for the moment, as MLB has fixed the number of pitchers active at any given time at 13, although minor league shenanigans assure that the true number of pitchers active over a season will inevitably be higher even with no injuries.

For the first 40 years of the 20th century, pitchers were getting better, but “better” came with the dreaded “dead arm,” and a pitcher whose arm went dead never returned. After 1980, pitchers kept getting better, and kept having dead arms despite improvements in training. But in 1974, Frank Jobe had resurrected the career of Tommy John. Pitchers still fell to injury, but now they started to return, and the number of pitchers used reflected not only the losses to injury but the return of pitchers ready to use up their second (and third, and fourth) ulnar collateral ligament. This caused a relative increase in the number of pitchers used.

But then we need to explain what I call The Great Stasis: 1941-1981. I don’t have a great explanation for this, though I will note a few things:

  • For us old fogies, this period corresponds to the period we started watching baseball and extends back a couple of decades to give us an illusion of a permanent situation. It wasn’t. What we see now is the normal situation. What we saw back then was the anomaly.
  • The first half of this period corresponds roughly to the integration of baseball, and the second half corresponds roughly to a rapid expansion of baseball (both in number of teams and length of season), and thus a rapid expansion in the number of active major league pitchers at any time. I could argue that in that situation, pitchers had the upper hand in that they didn’t need to improve quickly to keep their jobs. A competent pitcher could get a job, and in the expansion era pitchers clearly had the upper hand, with a series of changes made to the game to encourage scoring. Letting marginal pitchers and marginal batters in turned out to greatly favor pitchers. This story is far from perfect. I like it much better in the second half than the first half. I don’t think we associate integration with the expansion of pitchers relative to hitters… I think it largely goes the other way.

What do y’all think? Why (outside the Great Stasis) has the number of pitchers used per year tended to increase? What explains the Great Stasis? Let me know.

Now look at it another way. There are roughly 1,450 innings a season per team, and every one of them has to be pitched by someone. Injury should be a trivial determinant of innings per pitcher per game. Maybe we need more pitchers because they each pitch fewer innings per game. But let’s look at that figure over time.

I think it’s hard to look at this chart and see anything like eras or regimes. All I see is a steady decline in innings pitched per game. There was a brief reversal during the late ’60s, when pitchers got the upper hand before the mound was lowered. The change requiring relievers, under most circumstances, to face three batters is responsible for the small blip up at the right side. It should be noted that pitchers pitching shorter and shorter stints is not a reason why a team should need more pitchers. Five starters and seven relievers allow you to use a starter and three relievers every day (average innings per pitcher slightly above 2) and still give starters at least four days’ rest and relievers a day off every other day, in principle. Pitchers are definitely not getting hurt from being used too much, although I might channel Leo Mazzone and argue pitchers are getting hurt because they aren’t throwing enough between games. I don’t buy it, but this data doesn’t address that at all.

Michael Harris Splits

When last we discussed this, MH II was on a pace for a historic OPS difference before and after the All-Star Game. It turns out that small sample sizes are unreliable. His difference now sits at 0.286. Really good, but nowhere near Dan Uggla-historic.

The Game

Third against the Astros and Bill Anne Jan Jayden Murray. (Does anyone other than me remember 60’s celebrity game show fixture Jan Murray?) Jayden made his debut last week, and is already the best Jayden in MLB history. This was a bullpen game for the Trashcan Thumpers, who put another three rookies in the game to face the Braves. (Third is also the best Hurston in MLB history.)

Houston struck first when Zachary Cole took the first MLB pitch he had ever seen over the fence for a two-run homer. It’s a small sample size, but Zachary Cole was, by definition, at least until his second at-bat, the best hitter in MLB history per pitch seen. And that second at-bat was a run-scoring single, keeping his batting average and on-base percentage at 1.000, but lowering his slugging average from 4.000 to 2.500. On his third at-bat, he had another run-scoring single, this time against Dane Dunning who replaced Third after he’d given up 6 runs. Dunning Grybo’d 2 more to give Third a bad-Whitmanesque 8 runs yielded. But to show he was fair, he gave up another two of his own, so it was 10-0 after 5 1/2. But Cole’s skills were eroding: his OPS had fallen to 3.000. Dunning struck him out on his fourth at-bat. He obviously sucks.
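The slugging arithmetic above checks out; here is a quick verification, with Cole’s plate appearances taken from the description (homer, single, single, strikeout):

```python
def slg(total_bases, at_bats):
    # Slugging average: total bases per at-bat.
    return total_bases / at_bats

# Cole's line as described: HR (4 total bases), single, single, strikeout.
bases = [4, 1, 1, 0]
running = [slg(sum(bases[:n]), n) for n in range(1, len(bases) + 1)]
print(running)  # [4.0, 2.5, 2.0, 1.5]
```

And with his on-base percentage still 1.000 through three at-bats, an OPS of 2.000 + 1.000 = 3.000 after the third trip matches the text.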

Other things happened in this game: RAJ and White hit homers. Kim and Albies turned a nice force play.

Speaking of small sample sizes, after this game Third’s ERA was a respectable, but not other-worldly, 2.78.

Speaking of large sample sizes, the Braves’ Pythagorean won-loss record is rapidly converging to their actual won-loss record. The excess one-run losses are getting to be pretty irrelevant.
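For anyone new to it, the Pythagorean record referenced here is Bill James’s formula; a minimal sketch with made-up season run totals (not the Braves’ actual numbers):

```python
def pythag_pct(runs_scored, runs_allowed, exponent=2):
    # Bill James's Pythagorean expectation: RS^x / (RS^x + RA^x).
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# A hypothetical team outscoring its opponents 750 to 650 "should"
# play about .571 ball; deviations are mostly luck in close games.
print(round(pythag_pct(750, 650), 3))  # 0.571
```

When the actual record converges to this expectation, as noted above, it means the team is no longer over- or under-performing its run differential.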

Clemson at Tech tomorrow at noon. Later on, Whitman continues his quest to prove me wrong.