Maybe Bert Is Right (Part 2)
- Blog Post by: John Bonnes
- May 31, 2010 - 9:43 AM
(This is the second part of a 3-part series that I’ll be running on the TwinsCentric blog and at TwinsGeek.com. You can find Part 1 here. Part 3 on June 3rd.)
In Part 1, we discovered that the original metric for evaluating pitcher abuse, Pitcher Abuse Points (PAP), had been declared bunk by its creator, Rany Jazayerli. However, he and Keith Woolner presented another metric, PAP3, to replace it. It, too, starts tabulating a pitcher's abuse points at the 100-pitch mark. The evidence that it has any correlation to pitcher abuse is supposed to be in their Analyzing PAP essay, which is divided into two parts. We'll look at the first part of that essay today.
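For readers who want the mechanics: as Jazayerli and Woolner describe it, PAP3 accrues nothing through 100 pitches, and the penalty beyond that grows as the cube of the excess. Here's a minimal sketch of that idea (the function name is mine, and the cubic form is my paraphrase of their published formula):

```python
def pap3(pitch_count):
    """Pitcher Abuse Points, revised version (PAP3).

    No abuse points accrue through 100 pitches; beyond that,
    points grow as the cube of the pitches over 100.
    """
    return max(0, pitch_count - 100) ** 3

# A 100-pitch start earns 0 points; 110 pitches earns 1,000;
# 130 pitches earns 27,000. The cubic curve is the point:
# each extra pitch past 100 counts far more than the last.
```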
Analyzing PAP Essay
While the initial intent of PAP was to determine whether a pitcher is at risk for injury or a permanent reduction in effectiveness, Woolner and Jazayerli approached that question indirectly. They broke their study into two parts. First, they studied whether there is any short-term reduction in effectiveness for pitchers after a long outing. Then, in the second part, they studied whether high pitch counts can also predict injury.
In part one, they looked at starts for pitchers over an eleven-year period (1988 through 1998) and examined each pitcher's performance in the 21 days before and the 21 days after each start. If a pitcher ran up a high pitch count, did the 21 days after the start show a decrease in performance compared to the 21 days before?
After looking at some initial results, they implemented one more filter: they only analyzed "high-endurance" starting pitchers, meaning pitchers whose average pitch count is above the league average. They did this essentially so they could study the league's better pitchers, the ones most likely to be pushed. It also produced data that makes a little more sense.
The essay starts with a surprising result: they find a very slight decrease in performance across the board (about 1%) no matter how many pitches a pitcher throws. That holds up through 129 pitches; at 130 pitches, future performance slips about 2%, and at 140 pitches it dives about 5%. Those results weren't terribly in sync with what PAP would've predicted, so they tried some other formulas and arrived at the PAP3 curve instead.
To summarize Part 1, they found that a high pitch count can have a slight impact on a "high-endurance" pitcher's short-term performance. That impact is about 2% if a pitcher throws upwards of 130 pitches. In what is otherwise a very candid and objective study, I'm a little disappointed by the attempt to frame this as significant:
“Assuming a fairly abusive usage pattern across a staff, a team’s starting rotation could suffer a season-wide decline of about 2%. Considering the effect on both the innings pitched (putting more strain on the bullpen) and extra runs allowed by the starting pitchers, this might amount to perhaps 20-25 runs over the course of a season, worth about 2 to 2.5 games in the standings. It’s comparable to the difference in value between Tim Hudson and Kevin Tapani or Todd Ritchie in 2000. That’s a trade worth making.”
Um, hold it. So if I let all my pitchers throw 130+ pitches in all 162 games, I decrease my staff's effectiveness by 2%? And if I cap them at just 90 pitches, they decline by only 1%? And we think that's significant, do we?
Just so we're clear on what "1%" means: one of the metrics used to measure effectiveness was runs against. Carl Pavano (who has a significant injury history) gave up 119 runs last year while consistently throwing between 90 and 103 pitches. But if his team had let him throw 130 pitches, he would've given up... one more run? Again, I'm supposed to think that's a significant finding?
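To make that back-of-the-envelope arithmetic explicit (the runs-against figure is from the post above; the 1% penalty is the essay's across-the-board short-term estimate):

```python
runs_allowed = 119          # Pavano's runs against last year, per the post
short_term_penalty = 0.01   # the ~1% decline the essay found at most pitch counts

# A 1% decline in effectiveness, applied to a full season's runs against,
# works out to about one extra run.
extra_runs = runs_allowed * short_term_penalty
print(round(extra_runs, 1))  # prints 1.2
```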
And, of course, this doesn’t measure what all this was supposed to measure – whether it’s actually dangerous to the pitcher. That comes next, in Part 3….
© 2015 Star Tribune